Combining MongoDB and a GraphDB like Neo4J - mongodb

As part of a CMS I'm developing, I've got MongoDB as the primary datastore, which feeds into ElasticSearch and Redis. All of this is configured declaratively.
I'm currently trying to develop a declarative API in JSON (a DSL of sorts) which, once implemented, will let me write uniform queries in JSON while, at the backend, these datastores work in tandem to come up with the result. Federated search, if you will.
Now, while fleshing out the supported query types for this JSON API, I've come across a class of queries not (efficiently) supported by my current setup: graph-based queries like friend-of-friend, RDF queries, etc. Something I'd like to support as well.
So I'm looking for the best-fitting way to introduce a graph database into this ecosystem. I should probably mention that the app layer sits in Node.js.
I've come across lots of articles comparing Neo4j (a popular graph database) with MongoDB, but not many actual use cases, real-world scenarios in which the two complement each other.
Any pointers highly appreciated.
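In the simplest case, such a declarative JSON layer just routes each sub-query to the backend suited for it. Below is a minimal sketch; the DSL shape and the backend names are invented for illustration, not taken from any real system:

```python
# Hypothetical sketch of a declarative JSON query layer that routes
# sub-queries to different backends. The "kind" field and the backend
# names are made up; real adapters would wrap MongoDB, Elasticsearch
# and a graph database behind a uniform interface.

def route_query(query):
    """Pick a backend based on the kind of query in the JSON DSL."""
    kind = query.get("kind")
    if kind == "fulltext":
        return "elasticsearch"
    if kind == "graph":          # e.g. friend-of-friend traversals
        return "neo4j"
    return "mongodb"             # default: plain document lookups

q1 = {"kind": "fulltext", "match": {"title": "cms"}}
q2 = {"kind": "graph", "traverse": {"rel": "FRIEND", "depth": 2}}
q3 = {"find": {"_id": 42}}

print(route_query(q1))  # elasticsearch
print(route_query(q2))  # neo4j
print(route_query(q3))  # mongodb
```

A real federated layer would then merge the partial results, but the routing decision is the part the DSL has to encode.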

You might want to take a look at structr[1], which has a RESTful graph database backend that you can configure using Java beans. In future versions, there will be a configuration option using REST calls only, so that you can fire up a structr server and configure and use it as a standalone graph database backend.
Just contact us on twitter or via email.
(disclaimer: I'm one of the developers of structr, so this comment may not be 100% impartial :))
[1] http://structr.org

The databases are very much complementary.
Use MongoDB to store your raw data/system of record, and load that data into Neo4j for additional insight/analysis. When you are dealing with unstructured data, you want a datastore that is conducive to unstructured data, and MongoDB fits the bill (as do other similar NoSQL databases). While Neo4j is considered a NoSQL database, it doesn't fit the bill for unstructured data: because you have to decide what is a relationship, what is a node, and which properties are stored on each, it's better suited to semi-structured data where you have some understanding of the kind of analysis you want to do.
A great architecture is to store your unstructured data in MongoDB and use jobs to load it into Neo4j. This allows you to re-load your graph if you figure out there are new pieces of information you'd like to store in it for additional analysis.
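Such a load job can be sketched as a pure transformation from documents to nodes and edges. The document shape and the relationship rule below are invented for illustration; in a real job the input would come from a MongoDB cursor and the output would be written to Neo4j (e.g. via Cypher statements):

```python
# Sketch of the "load MongoDB documents into a graph" job described
# above. The schema (a "friends" array of ids) is hypothetical.

def docs_to_graph(docs):
    """Extract (nodes, edges) from raw documents."""
    nodes, edges = {}, []
    for doc in docs:
        nodes[doc["_id"]] = {"name": doc["name"]}
        for friend_id in doc.get("friends", []):
            edges.append((doc["_id"], "FRIEND", friend_id))
    return nodes, edges

raw = [
    {"_id": 1, "name": "alice", "friends": [2]},
    {"_id": 2, "name": "bob", "friends": [1, 3]},
    {"_id": 3, "name": "carol"},
]
nodes, edges = docs_to_graph(raw)
print(len(nodes), len(edges))  # 3 3
```

Because the job is a plain function of the raw documents, re-running it with a new relationship rule is exactly the "re-load your graph" step described above.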
They are definitely NOT replacements for each other. They fit very different use cases.

Related

Is mongodb the right choice for building health related web application?

I'm not sure if Stack Overflow is the right platform for the title question. I'm in a dilemma as to which front-end and back-end stack I should consider for developing a health-related web application.
I heartily appreciate any suggestions or recommendations. Thanks.
You will need to have a look at your data. If it is relational, I would personally go for a SQL server such as Microsoft SQL Server, MySQL or Postgres. If your data is non-relational, you can go for something like Mongo.
I'm not saying that MongoDB is bad; it all depends on your data and how you would like to structure it. Obviously, when you're working with healthcare data such as patient records, there are certain laws you need to adhere to, especially in the United States with HIPAA, but I am sure almost every country has something similar.
The implication might be that you need to encrypt any data stored in the database, and that's one of the benefits of a relational database, as most of them offer TDE (Transparent Data Encryption) or encryption at rest, which means your data is protected while it sits on disk.
When it comes to the front end, you can look at JavaScript frameworks such as Angular, Vue or React, and for your backend you can choose pretty much anything you know well, such as Node.js, .NET Core or Go. Pick your poison; each has its advantages and drawbacks, so you will need to investigate your options before committing to one or the other.
It depends on your data structures. MongoDB's dynamic schemas eliminate the need for a predefined data structure, so you can use MongoDB when you have a dynamic, less relational dataset. Additionally, MongoDB is natively scalable, so you can store a large amount of data without much trouble.
Use a relational DB system when you have highly relational entities. SQL enables you to have complex transactions between entities with high reliability.
MongoDB/NoSQL
High write load
Unstable Schemas
Can handle large amounts of data
High availability
SQL
Data structure fits into tables and rows
Strict relationships among entities
Complex queries
Frequent updates in a large number of records
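The "unstable schemas" point above can be illustrated with plain records; the list below stands in for a MongoDB collection, where documents in the same collection need not share fields (the field names are invented for illustration):

```python
# Illustration of dynamic schemas in a document store: each record
# simply carries the keys it has, whereas a relational table would
# force a fixed column set up front.

patients = [
    {"name": "alice", "age": 34},
    {"name": "bob", "age": 51, "allergies": ["penicillin"]},  # extra field
]

# Queries just probe for the fields they care about.
with_allergies = [p for p in patients if "allergies" in p]
print(len(with_allergies))  # 1
```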

Neo4j instead of relational database

I am implementing a Sinatra/Rails-based web portal that might eventually have a few many-to-many relationships between tables/models. This is a one-man, part-time team, but a real-world app.
I discussed my entities with someone and was advised to try Neo4j. Coming from the real, 'non-sexy' enterprise world, my inclination is to use a relational DB until it stops scaling or becomes a nightmare because of sharding etc., and only then think about anything else.
HOWEVER,
I am using Postgres for the first time in this project, along with DataMapper, and it's taking me time to get started.
I am just trying out a few things and building more use cases, so I constantly have to update my schema (prototyping an idea with feedback from the beta). I won't have to do this in Neo4j (except for changing my queries).
It seems very easy to set up search using Neo4j, but Postgres can do full-text search as well.
Postgres recently announced support for JSON and JavaScript. I'm wondering if I should just stick with PG and invest more time learning PG (which has a good community) instead of Neo4j.
I'm looking for use cases where Neo4j is better, especially at the prototyping/initial phase of a project. I understand that if the website grows, I might end up with multiple persistence technologies like S3, relational (PG), Mongo, etc.
It would also be good to know how it plays with the Rails/Ruby ecosystem.
Update1:
I got a lot of good answers, and it seems like the right thing to do is stick with Postgres for now (especially since I deploy to Heroku).
However, the idea of being schema-less is tempting. Basically, I am thinking of an approach where you don't define a data model until you have, say, 100-150 users and have figured out a good schema (business use cases) for your product, while you are just demoing the concept and getting feedback from limited signups. Then one can settle on a schema and move to relational.
It would be nice to know if there are easy-to-use schemaless persistence options (in terms of ease of setup for a new user) that might give up, say, scaling.
Graph databases should be considered if you have a really chaotic data model. They were designed to express highly complex relationships between entities. To do that, they store relationships at the data level, whereas an RDBMS uses a declarative approach. Storing relationships only makes sense if these relationships are very different; otherwise you'll just end up duplicating data over and over, taking up a lot of space for nothing.
To require such variety in relationships, you'd have to handle a huge amount of data. This is where graph databases shine: instead of doing tons of joins, they just pick a record and follow its relationships. To support my statement, you'll notice that every use case on Neo4j's website deals with very complex data.
In brief, if you don't feel concerned by what I said above, I think you should use another technology. If this is just about scaling, schemalessness or starting a project fast, then look at other NoSQL solutions (more specifically, either column- or document-oriented databases). Otherwise, you should stick with PostgreSQL. You could also, as you said, consider polyglot persistence.
About your update, you might consider hStore. I think it fits your requirements. It's a PostgreSQL module which also works on Heroku.
I don't think I agree that you should only use a graph database when your data model is very complex. I'm sure they could handle a simple data model/relationships as well.
If you have no prior experience with Neo4j or Postgres, then most likely both will take quite a bit of time to learn well.
Some things to keep in mind when picking:
It's not just about development against a database technology. You should consider deployment as well. How easy is it to deploy and scale Postgres/Neo4j?
Consider the community and tools around each technology. Is there a data mapper for Neo4j like there is for Postgres?
Consider that the data models are considerably different between the two. If you can already think relationally, then I'd probably stick with Postgres. If you go with Neo4j you're going to be making a lot of mistakes for several months with your data models.
Over time I've learned to keep it simple when I can. Postgres might be the boring choice compared to Neo4j, but boring doesn't keep you up at night. =)
Also I never see anyone mention it, but you should look at Riak (http://basho.com/riak/) too. It's a document database that also provides relationships (links) between objects. Not as mature as a graph database, but it can connect a few entities quickly.
The most appropriate choice depends on what problem you are trying to solve.
If you just have a few many to many tables, a relational database can be fine. In general, there is better OR-mapper support for relational databases, as they are much older and have a standardized interface and row-column structure. They also have been improved on for a long time, so they are stable and optimized for what they are doing.
A graph database is better if, e.g., your problem is more about the connections between entities, especially if you need longer-distance connections like "detect cycles (of unspecified length)" or "what do friends-of-a-friend like". Things like that get unwieldy when restricted to SQL joins; a problem-specific language like Cypher, in the case of Neo4j, makes them much more concise. On the downside, there are mappers between graph DBs and objects, but not for every framework and language under the sun.
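The friend-of-a-friend case can be sketched as a traversal over toy data: follow relationships out from a start node rather than joining tables. The adjacency dict below is invented; in Neo4j the equivalent Cypher query is roughly a `MATCH (a)-[:FRIEND*2]-(fof)` pattern:

```python
# Toy friend-of-a-friend traversal, illustrating the "pick a record
# and follow its relationships" style of a graph query.

friends = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "dave"},
    "carol": {"alice"},
    "dave": {"bob"},
}

def friends_of_friends(start):
    direct = friends.get(start, set())
    fof = set()
    for f in direct:
        fof |= friends.get(f, set())
    return fof - direct - {start}   # exclude self and direct friends

print(sorted(friends_of_friends("alice")))  # ['dave']
```

In SQL the same question needs a self-join per hop, which is why deeper or unbounded traversals get unwieldy there.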
I recently implemented a system prototype using Neo4j, and it was very useful to be able to talk about the structure and connections of our data and model that one-to-one in the data storage. Also, adding other connections between data points was easy, Neo4j being schemaless storage. We ended up switching to MongoDB due to troubles with write performance, but I don't think we could have finished the prototype in the same time without Neo4j.
Other NoSQL datastores (document-based, column, key-value) also cover specific use cases. Polyglot persistence is definitely something to look at, so keep your choice of backend reasonably separated from your business logic to allow you to change technology later if you learn something new.

MongoDB for Forex

I was wondering: can MongoDB be used for storing Forex data that would later be presented in client applications as real-time data, with analysis in the form of graphs? I will have different sources with different feeds which cannot be found from mainstream data providers.
Look at these papers coming from the MongoSF convention. Particularly about the analytics. Be aware that the data storage is only one aspect of - in this case - a very complex system design.
MongoDB can be used to store Forex data, the same as any database I can think of. I think the big question is: what do you want to get out of your data storage?
If you are after high performance, then NoSQL is certainly a good direction to go, as NoSQL stores typically provide better speeds on large datasets once table relationships get complex.
To be honest, though, regardless of feeds, Forex data can typically be stored as DateTime/High/Low/Open/Close/Currency/Interval, right? I use SQL Server to do something very similar to what you described, and accessing the stored data is NOT the performance bottleneck. It's when you start translating the data into graphs and adding indicators etc. that good design decisions pay off.
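As a sketch of that point, candles reduce to simple records and the interesting work happens in the indicator layer on top. The data and the moving-average indicator below are illustrative only:

```python
# Toy OHLC candles and a simple moving average over closing prices.
# The values are synthetic; any datastore can hold records like this.

candles = [
    {"time": t, "open": o, "high": o + 1, "low": o - 1, "close": o + 0.5}
    for t, o in enumerate([1.10, 1.12, 1.11, 1.15, 1.14])
]

def sma(values, window):
    """Simple moving average: mean over a sliding window."""
    return [
        sum(values[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(values))
    ]

closes = [c["close"] for c in candles]
print(len(sma(closes, 3)))  # 3
```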
MongoDB can be used to store Forex data just like most other database systems. However, if one is after high performance, NoSQL can be a better option because it provides better speeds on large sets of data.
A little information on MongoDB and use with financial markets:
https://www.mongodb.com/blog/post/mongodb-single-platform-all-financial-data-ahl
Arctic is a great open source datastore solution which uses MongoDB and Python:
http://www.slideshare.net/JamesBlackburn1/2015-pydata-highperformance-iot-and-financial-data-storage-with-python-and-mongodb

NoSQL-agnostic persistence layer

It seems to me that, at the end of the day, most NoSQL databases are at their core key/value stores, which means one should be able to build a layer which could be NoSQL database agnostic.
That layer would only use basic CRUD operations (put, get, delete) but would expose more advanced features on top, and you'd be able to switch the underlying DB with minimal effort, whether it's Mongo, Redis, Cassandra, etc.
Would building something like this have value to many people, and does it already exist?
Thanks
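A minimal sketch of the proposed layer, restricted to the lowest common denominator the question mentions. The dict backend is a stand-in for a real driver, and anything richer than get/put/delete (queries, secondary indexes) is exactly where the backends diverge:

```python
# Backend-agnostic key/value interface plus one toy implementation.
# A MongoDB or Redis adapter would subclass KVStore the same way.

class KVStore:
    """Minimal store-agnostic CRUD contract."""
    def get(self, key): raise NotImplementedError
    def put(self, key, value): raise NotImplementedError
    def delete(self, key): raise NotImplementedError

class DictStore(KVStore):
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def put(self, key, value):
        self._data[key] = value
    def delete(self, key):
        self._data.pop(key, None)

store = DictStore()
store.put("user:1", {"name": "alice"})
print(store.get("user:1")["name"])  # alice
store.delete("user:1")
print(store.get("user:1"))  # None
```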
NuoDB is an elastically scalable SQL/ACID database that uses a key/value model for storage. It runs on top of Amazon S3 today (as well as standard file systems) and could support any KV store in principle. For the moment its access method is SQL, but the system could readily support other data access languages and methods if that becomes a common requirement.
Barry Morris, NuoDB Inc.
There are Kundera and DataNucleus.
UnQL stands for Unstructured Query Language. It's an open query language for JSON, semi-structured and document databases.
It's next to impossible to build such a thing.
As a thought experiment, I suggest that you take, for example, Redis, MongoDB and Cassandra, and design an API of such layer.
These NoSQL solutions have drastically different characteristics and serve different purposes. Trying to build a common API for them is like building a common API for a SQL database, a spreadsheet document, a plain text file and Gmail.
While you can certainly come up with something, it will be completely pointless.
Different needs call for different tools.
PlayOrm is another solution: it is built on Cassandra but has a pluggable interface for HBase, MongoDB, etc. 20-30 years ago they said the same thing about RDBMSes, but their feature sets converged more and more. I suspect you will see a lot of that in NoSQL databases as well, as they adopt each other's feature sets.
Currently they have vastly different feature sets, but at the core there is a set of operations that is very similar.
PlayOrm actually provides its own query language that works on any NoSQL provider, so its S-SQL (Scalable SQL) can work with Cassandra, Hadoop, etc.
later,
Dean

Using Multiple Database Types to Model Data in a single application

Does it make sense to break up the data model of an application across different database systems? For example, the application stores all user data and relationships in a graph database (ideal for storing relationships), while storing other data in a document database such as CouchDB or MongoDB. This would require the graph database to reference unique IDs in the document database and vice versa.
Is this over complicating the data model and application? Or is this using the best uses of both types of database systems for scaling your application?
It definitely can make sense; it depends fully on the requirements of your application. It makes sense wherever you can use other database systems for the things they are really good at.
Take, for example, full-text search. Of course you can do more or less complex full-text searches with a relational database like MySQL, but there are systems such as Lucene/Solr which are optimized for this and can search millions of documents fast. So you could use these systems for their special task (here: a nifty full-text search), have them return the identifiers, and then load the relationally structured data from the RDBMS.
Or CouchDB. I use CouchDB in some projects as a caching system, in combination with a relational database. Of course I need to take care of consistency, but it's definitely worth the effort. It pushed performance in the projects a lot and, for example, decreased the load on the server from 2 to 0.2. :)
Something like this is, for instance, called cross-store persistence. As you mentioned, you would store certain data in your relational database, social relationships in a graph DB, user-generated data (documents) in a document DB, and user-provided multimedia files (pictures, audio, video) in a blob store like S3.
It is mainly about looking at the use cases and making sure that, from wherever you need it, you can access the "primary" or index key of each store (back and forth). You can encapsulate the actual lookup in your domain or DAO layer.
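That cross-store lookup can be sketched with two in-memory stores joined on a shared id. Both dicts are stand-ins (one for a graph database, one for a document database), and the ids and record shapes are invented:

```python
# A thin DAO joining a "graph" store and a "document" store on a
# shared id, as described above.

graph_db = {"user:1": {"follows": ["user:2"]}}        # relationships
doc_db = {
    "user:1": {"name": "alice", "bio": "editor"},     # documents
    "user:2": {"name": "bob", "bio": "author"},
}

def profile_with_follows(user_id):
    """Combine the document record with graph relationships."""
    profile = dict(doc_db[user_id])
    follow_ids = graph_db.get(user_id, {}).get("follows", [])
    profile["follows"] = [doc_db[f]["name"] for f in follow_ids]
    return profile

print(profile_with_follows("user:1")["follows"])  # ['bob']
```

The point of the DAO layer is that callers never see which store a field came from, so a backend can be swapped behind it.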
Some frameworks, like the Spring Data projects, provide an initial kind of cross-store persistence out of the box, mostly integrating JPA with a different NoSQL datastore. For instance, Spring Data Graph allows you to store your entities in JPA and add social graphs or other highly interconnected data as a secondary concern, leveraging a graph DB for the typical traversals and other graph operations (e.g. ranking, suggestions etc.).
Another term for this is polyglot persistence.
Here are two contrary positions on the question:
Pro:
"Contrary to that, I’m a big fan of polyglot persistence. This simply means using the right storage backend for each of your usecases. For example file storages, SQL, graph databases, data ware houses, in-memory databases, network caches, NoSQL. Today there are mostly two storages used, files and SQL databases. Both are not optimal for every usecase."
http://codemonkeyism.com/nosql-polyglott-persistence/
Con:
"I don’t think I need to say that I’m a proponent of polyglot persistence. And that I believe in Unix tools philosophy. But while adding more components to your system, you should realize that such a system complexity is “exploding” and so will operational costs grow too (nb: do you remember why Twitter started to into using Cassandra?) . Not to mention that the more components your system has the more attention and care must be invested figuring out critical aspects like overall system availability, latency, throughput, and consistency."
http://nosql.mypopescu.com/post/1529816758/why-redis-and-memcached-cassandra-lucene