I like the idea of document databases, especially MongoDB. They allow for faster development because we don't have to adjust database schemas. However, MongoDB doesn't support multi-document transactions and doesn't guarantee that modifications get written to disk immediately the way conventional databases do (I know you can make the time between flushes quite small, but it's still no guarantee).
Most of our projects are not so big that they need things like multi-server environments. So, keeping that in mind: are there any single-server MongoDB-like document databases that support multi-document transactions and reliable flushing to disk?
It might be worthwhile to look at ArangoDB. It is a multi-model database with a flexible data model for documents, graphs, and key-values. With respect to your specific requirements, ArangoDB has full ACID transactions, which can span multiple documents in the same collection as well as multiple collections (see Transactions in ArangoDB). That is, you can execute a group of manipulations of your documents together in a transaction and get guaranteed atomicity and isolation. If you additionally set waitForSync: true
(as described further down on said page), you get a guaranteed sync to disk before your transaction reports completion. Note that this happens automatically if your transaction spans multiple collections.
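Roughly, such a transaction could be submitted through ArangoDB's HTTP transaction API like this (a sketch in Python with requests; the URL, credentials and collection names are placeholders, and the exact options should be checked against the ArangoDB documentation for your version):
    # Sketch: an atomic multi-collection write in ArangoDB, synced to disk.
    import requests

    payload = {
        "collections": {"write": ["orders", "inventory"]},
        # Server-side JavaScript that ArangoDB executes atomically.
        "action": """
            function () {
                var db = require('internal').db;
                db.orders.save({order: 42, item: 'widget', qty: 1});
                db.inventory.update('widget', {stock: 9});
            }
        """,
        "waitForSync": True,  # don't report success before the data is on disk
    }

    resp = requests.post(
        "http://localhost:8529/_db/_system/_api/transaction",
        json=payload,
        auth=("root", ""),  # placeholder credentials
    )
    resp.raise_for_status()
    print(resp.json())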
A very short answer to your specific (but brief) requirements:
Are there any single server MongoDB-like document databases that support multi-document transactions and reliable flushing to disk?
RavenDB [1] provides support for multi-doc transactions [2]. Unfortunately, I don't know how it handles durability.
CouchDB [3] provides durable writes, but no multi-doc transactions.
RethinkDB [4] provides durable writes, but no multi-doc transactions.
So you might wonder what's different about these 3 solutions? Most of the differences are in their querying support (I'd say RethinkDB has the most advanced, covering pretty much all types of queries: sub-queries, JOINs, aggregations, etc.), their history (read: production readiness -- here I'd probably say CouchDB is in the lead), their distribution model (you mentioned that's not interesting for you), and their licensing (RavenDB: commercial, CouchDB: Apache License, RethinkDB: AGPL).
The next step would be for you to briefly look over their feature set and figure out which one comes close to your needs and give it a try.
I have some experience with CouchDB and ArangoDB which I can share:
You can run CouchDB with durability turned on (delayed_commits = false) so it will also sync your data to disk.
However, this is a global setting so it affects all writes. AFAIK you cannot set it on a per-collection level (the CouchDB term for "collection" would be "database").
Regarding multi-document operations: CouchDB has MVCC, so reading multiple documents from the same database provides a consistent result even in the face of parallel writers.
Writing multiple documents to the same database can also be made transactional for special cases, e.g. when using the bulk documents API.
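As a rough illustration (this applies to CouchDB 1.x, where _bulk_docs accepted an all_or_nothing flag that was later deprecated, so treat this purely as a sketch; URL, database name and documents are placeholders):
    # Sketch: writing several documents to one CouchDB database in one request.
    import requests

    body = {
        "docs": [
            {"_id": "order:1", "item": "widget", "qty": 2},
            {"_id": "order:2", "item": "gadget", "qty": 1},
        ],
        "all_or_nothing": True,  # CouchDB 1.x: commit all documents or none
    }

    resp = requests.post("http://localhost:5984/mydb/_bulk_docs", json=body)
    resp.raise_for_status()
    print(resp.json())  # per-document results: id and rev, or an error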
But there is no way to execute cross-database operations in CouchDB. This is just not intended.
On ArangoDB: in ArangoDB you can turn on immediate syncing to disk on a per-collection level: you can turn it on for collections in which you cannot tolerate any data loss, and turn it off for not-so-important collections for performance reasons. It will then still sync modifications to disk frequently, but not immediately. It also provides multi-document and multi-collection transactions.
Check out the following:
ArangoDB
RethinkDB
I would suggest you look at Couchbase.
Couchbase can be run single server & you can add nodes later if you want.
Couchbase has memcached integrated so you have fast caching of common data, with a reliable method of writing updates to disk.
They also have a new query language (in development, but you can use it now) called N1QL ("nickel") that gives you SQL-like access, if that's important to you.
With cross-datacenter replication, you can keep two DBs on different machines or in different data centers in sync, which is good for having an offsite backup. This also allows you to add Elasticsearch if you wish to have a full-text search engine for those types of queries.
In short, Couchbase is a pretty complete solution, all open source and has intelligent (in my opinion) architecture for addressing the typical problems with distributed databases (e.g.: every document is "owned" by a given node, so all changes go to that node, and then the updates are replicated, this is better, I think, than say Riak where you can have updates go to two nodes and then have to be reconciled.)
You can use Couchbase on one node to run the database for many projects by separating the projects into different buckets.
There are so many NoSQL databases that it's definitely hard to choose one. You will have to come up with proper requirements and know exactly what you want.
The following link compares almost all the popular NoSQL databases:
http://kkovacs.eu/cassandra-vs-mongodb-vs-couchdb-vs-redis
I hope this helps.
Berkeley DB is one we used. It supports ACID. It does have transactions, but I'm not entirely sure how your term "multi-document" applies. I imagine that as long as each database (i.e. individual document) shares the same BDB environment (i.e. where transactions are stored), that may get you what you want. BDB does have other tradeoffs though: with full durability and high concurrency, commits are pretty slow.
Give OrientDB a try: http://www.orientdb.org/
"OrientDB has the flexibility of the Document databases and the power of the Graph databases to manage relationships. It can work in schema-less mode, schema-full or a mix of both. Supports advanced features such as ACID Transactions, Fast Indexes, Native and SQL queries. It imports and exports documents in JSON. OrientDB uses a new indexing algorithm called MVRB-Tree, derived from the Red-Black Tree and from the B+Tree with benefits of both: fast insertion and ultra fast lookup".
You do not have to adjust schemas in document data stores, but that does not mean you do not need some sort of schema as you probably want to do something meaningful with your data. It appears you would like an ACID database. If you have relational data, and you need transactions with that data, well it sounds very much like you need a relational database.
With "NoSQL" databases like Mongo, you are giving up ACID for features like many writable replicas, sharding, and quick accessing of document data. Sounds like you do not benefit from that so why take the tradeoff? A lot of people have been doing hybrid approaches lately with PostgreSQL by storing documents in a relational table as blobs of JSON. With this, you can have the advantage of storing your data as not strictly structured columns where it is not needed.
So if you have multiple documents that you need to update transactionally, you can column out the keys and add a "document" column that is simply a blob of JSON which you serialize and deserialize. This is not criticizing Mongo or other document stores as databases; it is just not really a good choice for transactional multi-document data. MarkLogic, I believe, does ACID over multiple documents too.
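A minimal sketch of that hybrid approach, assuming PostgreSQL with psycopg2 (the table and column names are invented):
    # Sketch: transactional keys as real columns, everything else as a JSON blob.
    import json
    import psycopg2

    conn = psycopg2.connect("dbname=shop user=app")
    with conn:  # the block commits on success and rolls back on error
        with conn.cursor() as cur:
            cur.execute("""
                CREATE TABLE IF NOT EXISTS orders (
                    id          serial PRIMARY KEY,
                    customer_id integer NOT NULL,
                    status      text NOT NULL,
                    document    json NOT NULL
                )
            """)
            doc = {"items": [{"sku": "widget", "qty": 2}], "note": "gift wrap"}
            cur.execute(
                "INSERT INTO orders (customer_id, status, document) VALUES (%s, %s, %s)",
                (42, "pending", json.dumps(doc)),
            )
    conn.close()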
I think a lot of people find appeal in MongoDB due to its schema-less-ness, but in the end they get bitten by trying to shoehorn a relational model into it. So as always, the DB choice depends on how your data is shaped.
If I were you I would take a close look at Solr. The underlying data-layer (Lucene) is by far the most mature of the NoSQL databases, and Solr makes installing, configuring, and integrating a single-host lucene store trivial.
In answer to your question, it supports user-delineated transactions. The read-optimised nature of Lucene can make it unsuitable for many applications, but most of those are well suited to Solr/Lucene+[SQL,Cassandra,CouchDB,RDF] depending on the requirements.
Personally I tend to start with Solr+SQL or Solr+RDF, but I know some people who love the whole NodeJS+CouchDB style, and I am convinced of the value of the flexibility that provides.
The bottom line is that there are enough NoSQL stores and SQL extensions out there that care about data integrity to satisfy any requirement you have without compromising your or your users' data.
Personally I believe you really need to check what your requirements are.
Due to the dynamics of how your server's OS works, it is complicated to say that everything goes to disk "immediately", even when you tell it to. Certainly I know that ACID technologies like SQL are vulnerable to partial corruption from unfinished business, and to losing operations within a specific window, when a single server goes down. Unfortunately this is one of the problems of using a single server; you have no choice but to accept it.
I should note that a transaction does not ensure that your server will receive the entire data before failure ( http://en.wikipedia.org/wiki/Database_transaction ); I mean, what if the server dies part way through a transaction?
You can perform a safe rollback based on constraints with transactions but few databases will provide the ability to continue playing the transaction unless they have already received all necessary data for it (which isn't normally the case), by which time the data might even be stale anyway.
In fact, due to the weight of some transactions and the number of queries performed within them, I reckon you might at times get a greater window of operational loss using transactions than you might from the 60ms write-to-disk window in MongoDB. Of course that depends upon abuse, but just like stored procedures, this abuse is commonplace.
Transactions shine on cascading deletes and typical scenarios like transferring money in a bank account. However, cascading deletes are normally better done (as most sites do) by a cron job, with the application marking the row as deleted (to avoid the rollback of a transaction showing the deleted data back to the user again); this way you can do a lot of things to ensure consistency that you cannot do in real time while the user is using your application.
So you should really question why you need a particular technology and what it will succeed in doing; at the moment, the brevity of your question tells me you're not completely sure about your requirements.
I am developing a Java-based web application. The primary aim is to have inventory for products being sold on multiple websites, called channels. We will act as manager for all these channels.
What we need is:
Queues to manage inventory updates for each channel.
Inventory table which has a correct snapshot of allocation on each channel.
Keeping Session Ids and other fast access data in a cache.
Providing a Facebook-like dashboard (XMPP) to keep the seller updated as soon as possible.
The solutions I am looking at are Postgres (our DB till now, in synchronous replication mode) and NoSQL solutions like Cassandra, Redis, CouchDB and MongoDB.
My constraints are:
Inventory updates cannot be lost.
Job Queues should be executed in order and preferably never lost.
Easy/Fast development and future maintenance.
I am open to any suggestions. Thanks in advance.
Queues to manage inventory updates for each channel.
This is not necessarily a database issue. You might be better off looking at a messaging system (e.g. RabbitMQ).
Inventory table which has a correct snapshot of allocation on each channel.
Keeping Session Ids and other fast access data in a cache.
Session data should probably be put in a separate store more suitable for the task (e.g. Memcached, Redis, etc.).
There is no one-size-fits-all DB
Providing a facebook like dashboard(XMPP) to keep the seller updated asap.
My constraints are:
1. Inventory updates cannot be lost.
There are 3 ways to answer this question:
This feature must be provided by your application. The database can guarantee that a bad record is rejected and rolled back, but not guarantee that every query will get entered.
The app will have to be smart enough to recognize when an error happens and try again.
Some DBs store records in memory and then flush memory to disk periodically; this could lead to data loss in the case of a power failure. (E.g. Mongo works this way by default unless you enable journaling. CouchDB always appends to the records -- even a delete is a flag appended to the record -- so data loss is extremely difficult.)
Some DBs are designed to be extremely reliable: even if an earthquake, hurricane or other natural disaster strikes, they remain durable. These include Cassandra, HBase, Riak, Hadoop, etc.
Which type of durability are you referring to?
Job Queues should be executed in order and preferably never lost.
Most NoSQL solutions prefer to run in parallel, so you have two options here:
1. Use a DB that locks the entire table for every query (slower).
2. Build your app to be smarter or evented (client-side sequential queuing).
Easy/Fast development and future maintenance.
Generally, you will find that SQL is faster to develop with at first, but changes can be harder to implement;
NoSQL may require a little more planning, but it is easier to do ad hoc queries or schema changes.
The questions you probably need to ask yourself are more like:
"Will I need to have intense queries or deep analysis that a Map/Reduce is better suited to?"
"Will I need to change my schema frequently?"
"Is my data highly relational? In what way?"
"Does the vendor behind my chosen DB have enough experience to help me when I need it?"
"Will I need special features such as geospatial indexing, full-text search, etc.?"
"How close to real time will I need my data? Will it hurt if I don't see the latest records show up in my queries until 1 second later? What level of latency is acceptable?"
"What do I really need in terms of fail-over?"
"How big is my data? Will it fit in memory? Will it fit on one computer? Is each individual record large or small?"
"How often will my data change? Is this an archive?"
If you are going to have multiple customers (channels?), each with their own inventory schemas, a document-based DB might have its advantages. I remember one time I looked at an e-commerce system with inventory and it had almost 235 tables!
Then again, if you have certain relational data, a SQL solution can really have some advantages too.
I can certainly see how I could build a solution using Mongo, Couch, Riak or OrientDB with the given constraints. But as for which is the best? I would try talking directly to the DB vendors, and maybe watch the NoSQL Tapes.
Addressing your constraints:
Most NoSQL solutions give you a configurable tradeoff of consistency vs. performance. In MongoDB, for instance, you can decide how durable a write should be. If you want to, you can force the write to be fsync'ed on all your replica-set servers. At the other extreme, you can choose to send the command and not even wait for the server's response.
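For example, with the PyMongo driver you can pick that tradeoff per collection (a sketch; the names are placeholders and the available write-concern options depend on your server and driver versions):
    # Sketch: per-collection durability settings in MongoDB via write concerns.
    from pymongo import MongoClient
    from pymongo.write_concern import WriteConcern

    db = MongoClient("mongodb://localhost:27017").shop

    # Fire-and-forget: don't even wait for the server's acknowledgement.
    clicks = db.get_collection("clicks", write_concern=WriteConcern(w=0))
    clicks.insert_one({"page": "/home"})

    # Durable: wait until the write is journaled and acknowledged by a majority
    # of the replica set before returning.
    payments = db.get_collection(
        "payments", write_concern=WriteConcern(w="majority", j=True))
    payments.insert_one({"order_id": 42, "amount": 99.95})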
Executing job queues in order seems to be an application-code issue. I'd say a timestamp in the DB and an order-by query should do for most applications. If you have multiple application servers and your queues need to be perfect, you'd have to use a truly distributed algorithm that provides ordering, but that is not a typical requirement, and it's very tricky indeed.
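A minimal sketch of that idea on MongoDB (collection and field names are invented); find_one_and_update atomically flips the oldest pending job to "processing", so two workers don't grab the same one:
    # Sketch: a simple ordered job queue on top of a MongoDB collection.
    import datetime
    from pymongo import MongoClient, ASCENDING, ReturnDocument

    jobs = MongoClient("mongodb://localhost:27017").inventory.jobs

    def enqueue(channel, payload):
        jobs.insert_one({
            "channel": channel,
            "payload": payload,
            "state": "pending",
            "created_at": datetime.datetime.utcnow(),
        })

    def claim_next(channel):
        # Atomically claim the oldest pending job for this channel.
        return jobs.find_one_and_update(
            {"channel": channel, "state": "pending"},
            {"$set": {"state": "processing"}},
            sort=[("created_at", ASCENDING)],
            return_document=ReturnDocument.AFTER,
        )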
We've been using MongoDB for some time now, and I'm convinced this gives your app development speed a real boost. There's no big difference in maintenance, maintaining data is a pain either way. Not having a schema gives you added flexibility (lazy migrations), but it's more elaborate and requires some care.
In summary, I'd say you can do it both ways. The NoSQL is more code driven, and transactions and relational integrity are mostly managed by your code. If you're uncomfortable with that, go for a relational DB.
However, if your data grows huge, you'll have to code some of this logic manually, because you probably wouldn't want to do real-time joins on a 10B-row database. Still, you can implement that with SQL as well.
A good way to find the boundary between different databases is to consider what you can cache. Data that can be cached and reconstructed at any time is a great place to start introducing a new layer, because there's no big risk there. Also, cached data usually doesn't keep any relations, so you're not sacrificing any consistency here.
NoSQL is not correct for this application.
I mean, you can use it sure, but you will end up re-implementing a lot of what SQL offers for you. For example I see a lot of relations there. You also want ACID (although some NoSQL solutions do offer that).
There is no reason you can't use both - keep relational data in relational databases, and non-relational data in key/value stores.
We're designing an OLTP financial system. It should be able to support 10,000 transactions per second and have reporting features.
So we have come to the idea of using:
a NoSQL DB as our main storage
a MySQL DB (Percona Server, actually) running some ETLs from the NoSQL DB to store reporting data
We're considering MongoDB and Riak for the NoSQL job. We have read that Riak scales more smoothly than MongoDB, and we would like to hear your opinion.
Which NoSQL DB would you use for an OLTP financial system?
How has your experience been scaling MongoDB/Riak?
There is no conceivable circumstance in which I would use a NoSQL database for anything to do with finance. You simply don't have the data integrity or the internal controls needed. Dow Jones uses SQL Server to do its transactions, and if they can properly design a high-performance, high-transaction relational database, so can you. You will have to invest in some people who know what they are doing, though.
One has to think about the problem differently. The notion of transaction consistency stems from the UD (update and delete) in CRUD (Create, Read, Update, Delete). NoSQL DBs are CRAP (Create, Replicate, Append, Process) oriented, working by accretion of time-stamped data. With the right domain model, there is no reason that auditability and the equivalent of referential integrity can't be achieved.
The global-storage-based NoSQL databases - Caché from InterSystems and GT.M from FIS - are used extensively in financial services and have been for many years. Caché in particular is used for both the core database and for OLTP.
I can answer regarding my experience with scaling Riak.
Riak scales smoothly to the extreme. Scaling is as easy as adding nodes to the cluster, which is a very simple operation in itself. You can achieve near linear scalability by simply adding nodes. Our experience with Riak as far as scaling is concerned has been amazing.
The flip side is that it is lacking in many respects. Some examples:
You can't do something like count(*) or list keys on a production cluster. That would require a workaround if you want to do ETL from Riak into MySQL -- otherwise, how would you know what to (E)xtract?
(One possible workaround would be to maintain a bucket with a known key sequence that maps to values containing the keys you inserted into your other buckets.)
The free version of Riak comes with no management console that lets you know what's going on, and the one that's included in the Enterprise version isn't much of an improvement.
You'll need the Enterprise version if you're looking to replicate your data over WAN (e.g. for DR / high availability). That's alright if you don't mind paying, but keep in mind that Basho's pricing is very high.
I work with Starcounter (so I'm biased), but I think I can safely say that for a system processing financial transactions you have to worry about transaction consistency. Unfortunately, this is what the engines used for Facebook and Twitter had to give up to allow their scale-out strategy to offer performance. This is not because engines such as MongoDB or Cassandra are poorly designed; rather, it follows naturally from the CAP theorem (http://en.wikipedia.org/wiki/CAP_theorem). Simply put, changes you make in your database can overwrite other changes if they occur close together in time. OK for status updates and new tweets, but disastrous if you deal with money or other quantities. The amounts will simply end up wrong when many reads and writes are done in parallel. So for the throughput you need, a memory-centric NoSQL database with ACID support is probably the way to go.
You can use some NoSQL databases (Cassandra, EventStore) as storage for a financial service if you implement your app using event sourcing and concepts from DDD. I recommend you read this mini-book: http://www.oreilly.com/programming/free/reactive-microservices-architecture.html
OLTP can be achieved using NoSQL with a custom implementation.
There are two things to consider:
1. How are you going to achieve the ACID properties that an RDBMS gives?
2. Provide a custom blocking or non-blocking concurrency and transaction-handling mechanism.
To take you closer to a solution, look at Apache Phoenix, Apache Trafodion or Splice Machine.
Trafodion has full ACID support over HBase; you should take a look.
Cassandra can be used for both OLTP and OLAP. Good replication and eventual consistency put the choice in your hands. You need to design the system properly, and after all it's free of cost (though not free of developer effort); give it a try.
I would like to test the NoSQL world. This is just curiosity, not an absolute need (yet).
I have read a few things about the differences between SQL and NoSQL databases. I'm convinced about the potential advantages, but I'm a little worried about cases where NoSQL is not applicable. If I understand correctly, NoSQL databases essentially lack ACID properties.
Can someone give an example of some real-world operation (for example on an e-commerce site, or in a scientific application, or...) that an ACID relational database can handle but where a NoSQL database could fail miserably, either systematically with some kind of race condition or because of a power outage, etc.?
The perfect example will be something where there can't be any workaround without modifying the database engine. Examples where a NoSQL database just performs poorly will eventually be another question, but here I would like to see when theoretically we just can't use such technology.
Maybe finding such an example is database specific. If this is the case, let's take MongoDB to represent the NoSQL world.
Edit:
To clarify this question: I don't want a debate about which kind of database is better for certain cases. I want to know if this technology can be an absolute dead end in some cases, because no matter how hard we try, some kinds of features that a SQL database provides cannot be implemented on top of NoSQL stores.
Since there are many NoSQL stores available, I can accept picking an existing NoSQL store as a basis, but what interests me most is the minimum subset of features a store should provide to be able to implement higher-level features (e.g., can transactions be implemented with a store that doesn't provide X?).
This question is a bit like asking what kind of program cannot be written in an imperative/functional language. Any Turing-complete language can express every program that can be solved by a Turing machine. The question is: do you, as a programmer, really want to write an accounting system for a Fortune 500 company in non-portable machine instructions?
In the end, NoSQL can do anything SQL-based engines can; the difference is that you as a programmer may be responsible for logic in something like Redis that MySQL gives you for free. SQL databases take a very conservative view of data integrity. The NoSQL movement relaxes those standards to gain better scalability and to make tasks that are common to web applications easier.
MongoDB (my current preference) makes replication and sharding (horizontal scaling) easy, makes inserts very fast, and drops the requirement for a strict schema. In exchange, users of MongoDB must code around slower queries when an index is not present, implement transactional logic in the app (perhaps with three-phase commits), and take a hit on storage efficiency.
CouchDB has similar trade-offs but also sacrifices ad-hoc queries for the ability to work with data off-line then sync with a server.
Redis and other key-value stores require the programmer to write much of the index and join logic that is built into SQL databases. In exchange, an application can leverage domain knowledge about its data to make indexes and joins more efficient than the general solution SQL would require. Redis also requires all data to fit in RAM, but in exchange gives performance on par with Memcached.
In the end, you really can do everything MySQL or Postgres does with nothing more than the OS file-system calls (after all, that is how the people who wrote those database engines did it). It all comes down to what you want the data store to do for you and what you are willing to give up in return.
Good question. First a clarification. While the field of relational stores is held together by a rather solid foundation of principles, with each vendor choosing to add value in features or pricing, the non-relational (nosql) field is far more heterogeneous.
There are document stores (MongoDB, CouchDB) which are great for content management and similar situations where you have a flat set of variable attributes that you want to build around a topic. Take site-customization. Using a document store to manage custom attributes that define the way a user wants to see his/her page is well suited to the platform. Despite their marketing hype, these stores don't tend to scale into terabytes that well. It can be done, but it's not ideal. MongoDB has a lot of features found in relational databases, such as dynamic indexes (up to 40 per collection/table). CouchDB is built to be absolutely recoverable in the event of failure.
There are key/value stores (Cassandra, HBase...) that are great for highly-distributed storage. Cassandra for low-latency, HBase for higher-latency. The trick with these is that you have to define your query needs before you start putting data in. They're not efficient for dynamic queries against any attribute. For instance, if you are building a customer event logging service, you'd want to set your key on the customer's unique attribute. From there, you could push various log structures into your store and retrieve all logs by customer key on demand. It would be far more expensive, however, to try to go through the logs looking for log events where the type was "failure" unless you decided to make that your secondary key. One other thing: The last time I looked at Cassandra, you couldn't run regexp inside the M/R query. Means that, if you wanted to look for patterns in a field, you'd have to pull all instances of that field and then run it through a regexp to find the tuples you wanted.
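A rough sketch of that customer-keyed event log in modern CQL terms, using the DataStax Python driver (keyspace, table and column names are invented; the older Thrift-era API expressed the same idea as column families):
    # Sketch: events partitioned by customer, so per-customer reads are cheap
    # while "all failure events across all customers" is not.
    from cassandra.cluster import Cluster

    session = Cluster(["127.0.0.1"]).connect("logs")  # keyspace "logs" assumed

    session.execute("""
        CREATE TABLE IF NOT EXISTS customer_events (
            customer_id text,
            event_time  timeuuid,
            event_type  text,
            details     text,
            PRIMARY KEY (customer_id, event_time)
        )
    """)

    # Cheap: the partition key is the customer id.
    for row in session.execute(
            "SELECT * FROM customer_events WHERE customer_id = %s", ("cust-42",)):
        print(row.event_type, row.details)

    # Expensive (a full scan) unless event_type is indexed or stored in a
    # second table keyed differently:
    #   SELECT * FROM customer_events WHERE event_type = 'failure'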
Graph databases are very different from the two above. Relations between items (objects, tuples, elements) are fluid. They don't scale into terabytes, but that's not what they are designed for. They are great for asking questions like "Hey, how many of my users like the color green? Of those, how many live in California?" With a relational database, you would have a static structure. With a graph database (I'm oversimplifying, of course), you have attributes and objects. You connect them as makes sense, without schema enforcement.
I wouldn't put anything critical into a non-relational store. Commerce, for instance, where you want guarantees that a transaction is complete before delivering the product. You want guaranteed integrity (or at least the best chance of guaranteed integrity). If a user loses his/her site-customization settings, no big deal. If you lose a commerce transaction, big deal. There may be some who disagree.
I also wouldn't put complex structures into any of the above non-relational stores. They don't do joins well at-scale. And, that's okay because it's not the way they're supposed to work. Where you might put an identity for address_type into a customer_address table in a relational system, you would want to embed the address_type information in a customer tuple stored in a document or key/value. Data efficiency is not the domain of the document or key/value store. The point is distribution and pure speed. The sacrifice is footprint.
There are other subtypes of the family of stores labeled as "nosql" that I haven't covered here. There are a ton (122 at last count) different projects focused on non-relational solutions to data problems of various types. Riak is yet another one that I keep hearing about and can't wait to try out.
And here's the trick. The big-dollar relational vendors have been watching and chances are, they're all building or planning to build their own non-relational solutions to tie in with their products. Over the next couple years, if not sooner, we'll see the movement mature, large companies buy up the best of breed and relational vendors start offering integrated solutions, for those that haven't already.
It's an extremely exciting time to work in the field of data management. You should try a few of these out. You can download Couch or Mongo and have them up and running in minutes. HBase is a bit harder.
In any case, I hope I've informed without confusing, that I have enlightened without significant bias or error.
RDBMSes are good at joins; NoSQL engines usually aren't.
NoSQL engines are good at distributed scalability; RDBMSes usually aren't.
RDBMSes are good at data validation constraints; NoSQL engines usually aren't.
NoSQL engines are good at flexible and schema-less approaches; RDBMSes usually aren't.
Both approaches can solve either set of problems; the difference is in efficiency.
Probably the answer to your question is that MongoDB can handle any task (and so can SQL). But in some cases it's better to choose MongoDB, in others a SQL database. You can read about the advantages and disadvantages here.
Also, as @Dmitry said, MongoDB opens the door to easy horizontal and vertical scaling with replication & sharding.
RDBMSes enforce strong consistency, while most NoSQL stores are eventually consistent. So at a given point in time, when data is read from a NoSQL DB, it might not represent the most up-to-date copy of that data.
A common example is a bank transaction: when a user withdraws money, node A is updated with this event; if node B is queried for this user's balance at the same time, it can return an outdated balance. This can't happen in an RDBMS, as the consistency property guarantees that data is updated before it can be read.
RDBMSes are really good at quickly aggregating sums, averages, etc. from tables, e.g. SELECT SUM(x) FROM y WHERE z. That is something that is surprisingly hard to do in most NoSQL databases if you want an answer at once. Some NoSQL stores provide map/reduce as a way of solving the same thing, but it is not real-time in the way it is in the SQL world.
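For instance, the rough MongoDB equivalent of SELECT SUM(x) FROM y WHERE z = 'some value' is an aggregation pipeline (a sketch; collection and field names are placeholders):
    # Sketch: summing a field over matching documents with MongoDB's
    # aggregation framework (PyMongo).
    from pymongo import MongoClient

    coll = MongoClient("mongodb://localhost:27017").mydb.y

    result = list(coll.aggregate([
        {"$match": {"z": "some value"}},                     # WHERE z = 'some value'
        {"$group": {"_id": None, "total": {"$sum": "$x"}}},  # SUM(x)
    ]))
    total = result[0]["total"] if result else 0
    print(total)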
I've been looking at MongoDB and I'm fascinated. It appears (although I have to be suspicious) that in exchange for organizing my database in a slightly different way, I get as much performance as I have CPUs and RAM for free? It seems elegant and flexible, but I'm not trading that for speed like I am with Rails. So what's the catch? What does a relational database give me that I can't do as well or at all with Mongo? In other words, why (other than the immaturity of existing NoSQL systems and resistance to change) doesn't the entire industry jump ship from MySQL?
As I understood it, as you scale, you get MySQL to feed Memcache. Now it appears I can start with something equally performant from the beginning.
I know I can't do transactions across relationships... when would this be a big deal?
I read http://teddziuba.com/2010/03/i-cant-wait-for-nosql-to-die.html but as I understand it, his argument is basically that real businesses which use real tools don't need to avoid SQL, so people who feel a need to ditch it are doing it wrong. But no "enterprise" has to deal with nearly as many concurrent users as Facebook or Google, so I don't really see his point. (Walmart has 1.8 million employees; Facebook has 300 million users).
I'm genuinely curious about this... I promise I'm not trolling.
I am also a big fan of MongoDB. That having been said, it is absolutely not a wholesale replacement for RDBMS. Facebook has 300 million users but if some of your friends don't show up in the list one time, or one of the photo albums is missing on the occasional request, would you notice? Probably not. If your status update doesn't trickle down to all of your friends for a few minutes, does it matter? Hardly. If Wal-Mart's balance sheets are out of sync, would someone lose their head? Definitely.
NoSQL databases are great in "fuzzy" environments where relationships are not strict and data integrity can afford to be out of sync. RDBMS are still important when data sets are extremely complex and relational (hence the name), and they need to be kept pure.
The big push to NoSQL comes from the fact that for the last 30 years, we have been using RDBMSes for both scenarios. We now have a more appropriate tool for many situations. Some would argue most, in fact. But no one would argue all.
I write this as a dispute to Rex's answer.
I dispute the idea that NoSQL is relationless and fuzzy.
I worked with CODASYL many years ago with C and COBOL -- entity relationships are very tight in CODASYL.
In contrast, relational database systems have a very liberal policy towards relationships. As long as you can identify a foreign key, you can form a relationship ad hoc.
It is frequently taken for granted that SQL is synonymous with RDBMS, but people have been writing SQL drivers for CODASYL, XML, inverted sets, etc.
RDBMS/SQL do not equal precision in data or relationships. In fact, RDBMSes have been a constant cause of imprecision and misperception of relationships. I do not see how RDBMSes offer better data and relationship integrity than Hadoop, for example. Put on a layer of JDO, and we can construct a network of good and clean relationships between entities in Hadoop.
However, I like working with SQL because it gives me the ability to script ad hoc relationships, even though I realise that ad hoc relationships are a constant cause of relationship adulteration and problems.
Having the opportunity to work with statistical analysis of business and industrial processes, SQL gave me the ability to explore relationships where no relationships had previously been perceived. The opportunity to work with statistical analysis gave me insights that would not normally come the way of SQL programmers.
For example, you would design and normalise your schema to reflect a set of processes. What you might not realise is that relationships change over time. The statistical characteristics would reveal that a schema may no longer be as "properly normalised" as it once had been. That the principal components of the processes have mutated over time. But non-statistical programmers do not understand that and continue to tout RDBMS as the perfect solution for data integrity and relationship precision.
However, in a relationship-linking database, you can link entities in relationships as they appear. When relationships mutate, the links naturally mutate with the data. Relationships and their mutations are documented within the database system without the expensive need to renormalise the schema. At that point, an RDBMS is good only as a temp DB.
But then you might counter that RDBMSes too allow you to flexibly mutate your relationships, since that is what SQL does best. True, very true -- as long as you perform BCNF or even 4NF. Otherwise, you would begin to see your queries and data loaders performing replicated operations. But then your many years in the RDBMS business have surely at least made you realise that BCNF is very expensive and operationally inefficient, and that we are constantly guilty of 2.5NF-ing our schemata.
To say that RDBMSes and SQL promote data and relationship integrity is a gross mis-statement. Unless you work in a company that is tiny, or never stay in a position for more than two years, you will see the amount of data and information mutation, and the problems caused by RDBMSes. The abuse of RDBMSes is why executives find their view restricted by their computer applications, and why companies fail financially for not seeing changes in market behaviour: their views were restricted by programmers whose own views were restricted by veneration of their beloved RDBMS schemata.
That is why SQL programmers do not understand why the company statistician refuses to use the application you crafted meticulously and instead employs a college intern to write SQL to download data onto their personal servers, and why your company executives learn to trust the accountants' and statisticians' spreadsheets rather than your elegant multi-tiered applications: because of your applications' inability to mutate with the processes.
It might not be possible, but I still urge you to acquire some statistical understanding to perceive how processes mutate over time so that you can make the right technological decision.
The reason people are not moving to SQL-less technology is the lack of a good scripting environment like SQL for performing ad hoc relationship analysis, not because SQL-less technology is deficient in precision or integrity. Ad hoc relationship analysis is very important nowadays, given the rapid and agile application-development attitudes and strategies we have today.
Let me hit the questions one at a time:
I know I can't do transactions across relationships... when would this be a big deal?
Picture cascading deletes. Or even just basic referential integrity. The concept of "foreign keys" can't really be enforced across "collections" (the Mongo term for tables). You can do atomic writes only to a single "document" (a.k.a. record). So if you have a DB issue, you can orphan data in the DB.
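As an illustration (collection names are invented; each statement is atomic on its own, but nothing ties the two together the way a foreign key with ON DELETE CASCADE would):
    # Sketch: a "cascading delete" in MongoDB is really two independent writes;
    # a crash between them leaves orphaned child documents behind.
    from pymongo import MongoClient

    db = MongoClient("mongodb://localhost:27017").blog

    db.authors.delete_one({"_id": "author-42"})
    # <-- if the process dies here, every post referencing author-42 is orphaned;
    #     there is no foreign key and no cross-collection rollback.
    db.posts.delete_many({"author_id": "author-42"})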
I get as much performance as I have CPUs and RAM for free?
Not free, but definitely with a different set of trade-offs. For example, Mongo is great at running single-record, key/value look-ups. However, Mongo is poor at running relational queries; you'll need to use map-reduce for many of these. Mongo is also extremely RAM-hungry: it basically demands 64-bit for any significant dataset. Mongo will suck up drive space too; load up a 140GB DB and you can end up using 200+ GB as the swap file grows during use.
And you're still going to want a fast drive.
In fact, I think it's safe to say that MongoDB is really a DB system that caters to leading-edge hardware (64-bit, lots of RAM, SSDs). I mean, the whole DB is centered around looking up index data in RAM (hello, 64-bit) and then doing focused random lookups on the drive (hello, SSDs).
why ... doesn't the entire industry jump ship from MySQL?
It's not ACID-compliant. Probably quite bad for the banking system (of course, most banks are still processing flat files, but that's a different issue). However, note that you can force "safe" writes with Mongo and guarantee that data gets to disk, but only one "document" at a time.
It's still very young. Lots of big business are still running old versions of Crystal Reports on their SQL Server 2000 app written in VB6. Or they're building enterprise service buses to manage the crazy heterogeneous environments they've built up over the years.
It's a very different paradigm. Maybe 30% of the questions I regularly see on Mongo mailing lists (and here) are fundamentally tied to "How do I do query X?" or "How do I structure this data?" Using MongoDB typically requires that you denormalize in advance. This is not only a little difficult, it's something we're untrained in: most people only learn "normalization" in school; nobody teaches us how to denormalize for performance.
It's not the right tool for everything. Honestly, I think that MongoDB is a great tool for reading and writing transactional data: the simple "one-at-a-time" CRUD that comprises much of modern apps. However, MongoDB is not really great at reporting. In fact, I honestly envision that the next step is not "Mongo for everything", it's "Mongo for transactional" and "MySQL for reporting". When your data gets big enough that you throw out "real-time reporting", then using map-reduce to populate a reporting DB doesn't seem that bad.
As I understood it, as you scale, you get MySQL to feed Memcache. Now it appears I can start with something equally performant from the beginning.
Honestly, I'm working towards this on a few of my projects. Again, I think that MongoDB actually does make a valid caching layer. In fact, it makes a file-backed caching layer. So if you're capable of pushing MySQL changes to Mongo, then you're getting Memcached without cache misses. It also makes it easy to "warm the cache" on a new server: just copy the files and start Mongo pointing at the correct folder. It really is that easy.
How often do you think Facebook does arbitrary queries against its datastore(s)? Not everything is a web app, and conversely not every set of data needs to be analyzed deeply.
NoSQL, in my opinion, is largely a reactionary response to what basically amounted to people using RDBMSes for tasks they were not well suited to, because people didn't actively make a decision based on their needs and chose some default. To "jump ship from MySQL" (or RDBMSes in general) industry-wide would be to make the same mistake all over again, and the pendulum would end up swinging back the other way.
If MongoDB works for your use case, by all means go ahead. Just don't assume your use case is all use cases. There is no technology that fits all scenarios. The invention of the supersonic jets didn't eliminate the use of freight trains.
The big backlash against NoSQL is rooted in the mentality of many of the NoSQL advocates. Specifically, the attitude best summarized as "SQL is too hard, I shouldn't have to do it". I dislike NoSQL because it seems in many cases to be elevating ignorance.
I know I can't do transactions across relationships... when would this be a big deal?
More often than you might expect. There are a lot of things that can go wrong when you can't assume a consistent dataset.
I have used MongoDB, Redis (which, beyond key-value pairs, supports lists, sets and sorted sets), Tokyo Tyrant, Memcached, and MySQL & PostgreSQL.
The arguments between NoSQL DBs and SQL-based DBs are completely baseless. You need to choose the appropriate model based on your use case. If you need ACID compliance, go ahead with a SQL DB like PostgreSQL, Oracle, etc. If you need high performance but care less about the data, you may consider a NoSQL DB. They are fundamentally different technologies. You can even use a combination of models. With NoSQL, you will be missing relationships, constraints and sometimes transactions; in fact, that is one of the reasons NoSQL stores are faster.
Once I lost two months of aggregate data with MongoDB. No clue how I lost it. But I had a backup, so I lost only a few minutes of data; I brought MongoDB back from the backup. If you use NoSQL, take occasional backups or schedule cron jobs for DB backup. This is applicable to SQL DBs also.
Compared to SQL RDBMSes, NoSQL DBs are younger and still under full-fledged development, but they are mature within their scope, i.e. they are meant for high performance and easy replication.
On my website (stacked.in), I have used only Redis, and it works much, much faster than MySQL.
Remember, NoSQL isn't exactly new. After all, they had to use something before SQL and relational databases, right? In fact, systems like MUMPS and CODASYL work the same way and are decades old. What relational databases give you is the ability to query data in arbitrary ways.
Say you have a database with customers, their purchases, and what items they purchased. A NoSQL DB might have customers containing purchases and purchases containing items. This makes it easy to find out what items a given customer purchased, but hard to find out what customers purchased a given item. A relational DB would have tables for customers, purchases, items, and a table linking items to purchases. In SQL, both queries are trivial to formulate, and the database engine does all the hard work for you.
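To make that concrete, here is a sketch (names invented) of one customer document embedding purchases and items, with the two query directions noted in the comments:
    # Sketch: the document-model shape for the customers/purchases/items example.
    customer = {
        "_id": "cust-42",
        "name": "Alice",
        "purchases": [
            {"purchase_id": 1,
             "items": [{"sku": "widget", "qty": 2}, {"sku": "gadget", "qty": 1}]},
        ],
    }

    # Easy: "what did this customer buy?" -- fetch the one document and read
    # customer["purchases"].
    #
    # Harder: "which customers bought a widget?" -- you must query inside nested
    # arrays (and have planned/indexed for that access path), or maintain a
    # second structure keyed by item.
    #
    # In the relational model, both directions are symmetric joins over
    # customers, purchases, purchase_items and items tables, e.g.:
    #   SELECT DISTINCT c.* FROM customers c
    #     JOIN purchases p       ON p.customer_id = c.id
    #     JOIN purchase_items pi ON pi.purchase_id = p.id
    #     JOIN items i           ON i.id = pi.item_id
    #    WHERE i.sku = 'widget';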
Also, keep in mind that part of the NoSQL trend is to sacrifice consistency or reliability for speed, scalability, and cost. Relational DBs can scale, but it's not cheap. If you go to http://tpc.org you can find RDBMSes that run on hundreds of cores simultaneously to deliver millions of transactions per minute, but they cost millions of dollars.
If your data does not take advantage of relational algebra, and you do not need ACID guarantees, then you don't gain anything by using languages that cater exclusively to those uses.