Implementing warehousing system using rethinkdb: are there any strictly ACID requirements I need? - postgresql

Situation
I am trying to implement a warehouse system using a traditional database.
The tables are :
products (each row representing 1 sku)
warehouse_locations ( each row represents a particular shelf in a particular warehouse)
pallets (each row represents a particular pallet)
user_defined_categories (each row represents a particular user defined category: e.g. reserved, available, total_physical, etc)
products_in_pallets_by_categories (each row will have foreign keys of the pallets, products, and user_defined_categories table. will specify quantity of products in a particular pallet of a particular category.)
products_in_warehouse_locations_by_categories (each row will have foreign keys of the warehouse_locations, products, and user_defined_categories tables. will specify quantity of products in a particular warehouse location of a particular category.)
What end users want to see/do
End users will update the system about what products are placed/removed on what pallet.
End users will also want to know any time (preferably in real-time) how many reserved or available products are in the warehouse.
So what's my initial plan?
I wanted to use a traditional RDBMS like PostgreSQL and a message queue like RabbitMQ to provide real-time updates. By real-time updates, I mean the end users using either a single page application or a mobile phone can observe changes in inventory in real-time.
So what's changed?
I came across rethinkdb FAQ and it said
RethinkDB is not a good choice if you need full ACID support or strong schema enforcement—in this case you are better off using a relational database such as MySQL or PostgreSQL.
Why are you even considering rethinkdb?
Because if I can use it and it allows real-time updates, it will help tremendously, as we expect the client's sales team to be placing reservations on our system from around the world.
What's the most frequent updates/inserts?
The movement of the products from one place to another. I expect plenty of updates/inserts/deletes to the relation tables. I apologise that I do not know how to explain this in the rethinkdb paradigm; I am a traditional RDBMS person.
Is the system built yet?
Not yet. Which is why I want to seek an answer regarding rethinkdb before actually proceeding.
Do you expect to use any transactions?
Well, I am not sure.
I can think of a real world case where a warehouse worker moves products (partially or completely) from one pallet to another pallet.
Another real world case will be where a warehouse worker moves the products from a pallet to a warehouse_location (or vice-versa).
Do I definitely need to use transactions? Again, I am not sure.
Because I expect the workers to update the system AFTER they have physically finished the moving.
I will provide a screen for them to choose
move from <some dropdown> to <another dropdown>
So what's the question?
Do I need to have full ACID support or strong schema enforcement for my warehouse system based on my user requirements at the moment? And is it implementable using rethinkdb?
I also expect to implement activity streams once the system is implemented which will show events such as Worker A moved 100 units of product A from warehouse shelf 1A to pallet 25.

When you are dealing with things where information must always be accurate and consistent, ACID matters. From what you say, it sounds like that is the case here.
It sounds to me like you want to just allow real-time updates, and that the key problem is that you see RabbitMQ as the non-real-time component, correct? Why were you even considering RabbitMQ? (If it is to allow the db to go down for maintenance, maybe implement a backup private caching store in SQLite?)
In general, you should assume you need ACID compliance until you have a model where eventual consistency is OK. Moreover, real-time accurate reporting rules out eventual consistency.
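To make the pallet-move case from the question concrete, here is a minimal sketch of why a transaction helps: the decrement on the source pallet and the increment on the destination either both happen or neither does. SQLite stands in for PostgreSQL, and the table/column names are assumptions based on the schema described above.

```python
import sqlite3

# SQLite stands in for PostgreSQL; table/column names are guessed from
# the question's schema.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE products_in_pallets_by_categories (
    pallet_id INTEGER, product_id INTEGER, category_id INTEGER, quantity INTEGER)""")
conn.execute("INSERT INTO products_in_pallets_by_categories VALUES (1, 10, 1, 100)")
conn.execute("INSERT INTO products_in_pallets_by_categories VALUES (2, 10, 1, 0)")
conn.commit()

def move(conn, product_id, category_id, src_pallet, dst_pallet, qty):
    """Move qty units between pallets atomically: if either UPDATE
    fails, neither change is kept."""
    try:
        with conn:  # begins a transaction; commits on success, rolls back on error
            cur = conn.execute(
                """UPDATE products_in_pallets_by_categories
                   SET quantity = quantity - ?
                   WHERE pallet_id = ? AND product_id = ? AND category_id = ?
                     AND quantity >= ?""",
                (qty, src_pallet, product_id, category_id, qty))
            if cur.rowcount != 1:
                raise ValueError("insufficient stock on source pallet")
            conn.execute(
                """UPDATE products_in_pallets_by_categories
                   SET quantity = quantity + ?
                   WHERE pallet_id = ? AND product_id = ? AND category_id = ?""",
                (qty, dst_pallet, product_id, category_id))
        return True
    except ValueError:
        return False

move(conn, 10, 1, 1, 2, 30)   # succeeds: both rows updated together
move(conn, 10, 1, 1, 2, 999)  # fails: rolled back, neither row changes
```

Without the transaction, a crash between the two UPDATEs would make units vanish from (or appear in) the warehouse, which is exactly the kind of inconsistency ACID prevents.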

Related

How to implement an optimistic (or pessimistic) locking when using two databases that need to be in sync?

I'm working on a solution in which we have two databases that are used with the following purposes:
An Elasticsearch cluster used for search purposes
A Postgres database that acts as a source of truth for the data
Our application allows users to retrieve and update products, and a product has multiple attributes: name, price, description, etc... And two typical use cases are:
Retrieve products by name: a search is performed using elasticsearch, and then the IDs retrieved by ES are used on a secondary query against Postgres to obtain the actual and trustworthy data (so we get fast searches on big tables while getting trustworthy data)
Update product fields: We allow users to update any product information (kind of a collaborative wiki). First we store the data in Postgres, and then into Elasticsearch.
However, as I feared, as the number of people using the app increased, we've run into race conditions: if user #1 changed the name of a product to "Banana" and user #2 at the same time changed it to "Apple", sometimes in Elasticsearch the last record saved would be "Banana" while in Postgres "Apple" would be the last value, creating a serious inconsistency between databases.
So I've ventured into reading about optimistic/pessimistic locking in order to solve my problem, but so far all the articles I find deal with the case where you only use one relational database, and the solutions offered rely on ORM implementations (e.g. Hibernate). But our combined storage solution of ES + Postgres requires more "ballet" than that.
What techniques/options are available to me to solve my kind of problem?
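For reference, the optimistic-locking technique mentioned above boils down to a version column and a conditional UPDATE. A minimal sketch, with SQLite standing in for Postgres and illustrative table/column names:

```python
import sqlite3

# Optimistic locking with a version column (names are illustrative).
# An update only succeeds if the version it originally read is still
# current; a concurrent writer bumps the version, so a stale update
# matches zero rows and the caller must re-read and retry.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, version INTEGER)")
conn.execute("INSERT INTO products VALUES (1, 'Pear', 0)")
conn.commit()

def update_name(conn, product_id, new_name, expected_version):
    cur = conn.execute(
        "UPDATE products SET name = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_name, product_id, expected_version))
    conn.commit()
    return cur.rowcount == 1  # False means a concurrent writer won the race

# Both users read the product at version 0; only the first write wins.
first = update_name(conn, 1, "Banana", 0)
second = update_name(conn, 1, "Apple", 0)  # stale version: rejected
```

The losing writer re-reads the row and retries, and only the winning value is propagated to Elasticsearch, so both stores converge on the same final value.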
Well, I may attract some critics, but let me explain it the way I understand it. What I understand is that this problem/concern is more of an architectural matter than a design/code one.
Immediate consistency (and, of course, eventual consistency)
From the application layer
For immediate consistency between the two databases, the only way you can achieve it is to do polyglot persistence in a transactional way, so that either the same data gets updated in both Postgres and Elasticsearch or in neither of them. I wouldn't recommend this, purely because it would put a lot of pressure on the application and you would find it very difficult to scale/maintain.
So basically GUI --> Application Layer --> Postgres/Elasticsearch
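A minimal sketch of this dual-write-with-compensation idea, with plain dicts standing in for Postgres and Elasticsearch. Real code would need genuine distributed-transaction or compensation machinery, which is exactly the pressure on the application warned about here:

```python
# Dicts stand in for the two stores; all names are illustrative.
postgres = {}       # source-of-truth database
elasticsearch = {}  # search index

def index_into_search(product_id, data):
    elasticsearch[product_id] = data

def save_product(product_id, data, indexer=index_into_search):
    previous = postgres.get(product_id)
    postgres[product_id] = data
    try:
        indexer(product_id, data)
    except Exception:
        # Compensate: undo the Postgres write by hand so the stores agree.
        if previous is None:
            postgres.pop(product_id, None)
        else:
            postgres[product_id] = previous
        raise

save_product(1, {"name": "Banana"})  # both stores updated together

def broken_indexer(product_id, data):
    raise RuntimeError("search cluster down")

try:
    save_product(2, {"name": "Apple"}, indexer=broken_indexer)
except RuntimeError:
    pass  # the Postgres write was compensated, so the stores still agree
```

Even in this toy version, the compensation path is the fragile part: if the application dies between the write and the compensation, the stores diverge anyway, which is why the queue/indexer approaches below are usually preferred.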
Queue/Real Time Streaming mechanism
You need to have a messaging queue so that updates go to the queue in an event-based approach.
GUI --> Application Layer --> Postgres--> Queue --> Elasticsearch
Eventual consistency but not immediate consistency
Have a separate application; let's call it the indexer. The purpose of this tool is to pick up updates from Postgres and push them into Elasticsearch.
What you can have in the indexer is a configuration per source, which would have:
An option to do select * and index everything into Elasticsearch, i.e. a full crawl
This would be utilized when you want to delete/reindex the entire data set into Elasticsearch
The ability to detect only the updated rows in Postgres and push just those into Elasticsearch, i.e. an incremental crawl
For this you would need a select query with a where clause based on a status column on your Postgres rows (e.g. pull records with status 0 for documents that were recently updated), or based on a timestamp (pull records updated in the last 30 secs/1 min, depending on your needs). This is the incremental query.
Once you perform the incremental crawl, if you implemented it using a status column, you need to change the status to 1 (success) or -1 (failure) so that the same document doesn't get picked up in the next crawl. This is the post-incremental query.
Basically, schedule jobs to run the above queries as part of the indexing operations.
Basically we would have GUI --> Application Layer --> Postgres --> Indexer --> Elasticsearch
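A rough sketch of the incremental-crawl step, following the status-column convention above (0 = pending, 1 = success, -1 = failure). SQLite and a dict stand in for Postgres and Elasticsearch, and all names are illustrative:

```python
import sqlite3

# SQLite stands in for Postgres, the dict for Elasticsearch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, status INTEGER)")
conn.executemany("INSERT INTO products VALUES (?, ?, 0)",
                 [(1, "Banana"), (2, "Apple")])
conn.commit()

search_index = {}

def incremental_crawl(conn):
    # Incremental query: only rows touched since the last crawl (status 0).
    rows = conn.execute("SELECT id, name FROM products WHERE status = 0").fetchall()
    for row_id, name in rows:
        try:
            search_index[row_id] = {"name": name}
            new_status = 1    # success: skip this row on the next crawl
        except Exception:
            new_status = -1   # failure: flag for retry/inspection
        # Post-incremental query: record the outcome.
        conn.execute("UPDATE products SET status = ? WHERE id = ?",
                     (new_status, row_id))
    conn.commit()
    return len(rows)

incremental_crawl(conn)   # picks up and indexes both pending rows
incremental_crawl(conn)   # nothing left with status 0, so a no-op
```

A scheduled job would simply call `incremental_crawl` every few seconds, which is what gives you eventual (but not immediate) consistency.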
Summary
I do not think it is wise to aim for a fail-proof design; rather, we should have a system that can recover in the quickest possible time when it comes to providing consistency between two different data sources.
Having the systems decoupled helps greatly in scaling and in tracking down issues related to data correctness/quality, and at the same time helps you deal with frequent updates as well as the growth rate of the data.
Hope it helps!

SaaS system with dynamic data model in production

I want to design a product which allows customers to create their own websites. A customer will be able to maintain his website's data model on the fly, run queries on it, and display the output on an HTML page. I doubt a traditional RDBMS is the right choice, for two reasons: with every customer the amount of data will grow, and the RDBMS might reach its limits even if scaled; and as the data model is highly dynamic, the many DDL queries will slow down the whole system.
I'm trying to figure out which database/data-storage system might be the best option for such a system. Recently I have read a lot about NoSQL solutions like Cassandra and MongoDB, and they look promising in terms of performance, but come with a flaw: the data is not relational, so it has to be denormalized.
I don't know what will be the impact of denormalizing a dynamic customer defined data model, because the customer models and inserts data first (in a relational way) and then does the queries afterwards. The denormalization has to happen automatically which leads to another problem: Can I create one table for each query, even if some queries might be similar? There might exist a high redundancy of data after a while.
Does creating/updating tables on the fly have any impact?
Every time the customer changes data the same data has to be changed in all tables which hold a copy of the same entity (like the name of an employee has to be changed in "team member" and also in "project task"). Are those updates costly?
Is it possible to nest data with unlimited depth like {"team": {"members": [{"name": "Ben"}]}}?
There might be even better/other approaches, I'm happy for any hints.
Adding clarification to the requirements
My question actually is, how can I use a NoSQL DB like Cassandra to maintain relational data and will the solution still perform better compared to a RDBMS?
The customer thinks relationally (because, in fact, data is always relational in my opinion) no matter what DBMS is used. And this service is not about letting the customer choose the underlying data storage. There can be only one.
A customer can define his own relational data model by using a management frontend provided by the application. The data model may be changed at any time by the customer. In an RDBMS, DDL on a production system is not a good idea. On top of the data schema the customer can add named queries and use them as a data source on any web page he creates.
An example would be a query for News given the name "news" and in a web page it would be used like <ul><li query="news"><h1>[news.title]</h1></li></ul>, which would execute the query and iterate through the data and repeat the <li> on each iteration. That is the most simple example though.
In more complex examples using SQL, there might be extensive use of subqueries, which perform badly. In NoSQL there seems to be the option to first denormalize and prepare a table with the data needed by the query, and then just query that table. Any changes to the involved data would lead to an update of that table. That means that for every query the customer creates, the system will automatically create and maintain a table and its data, so there will be a lot of data redundancy. Benchmarks state that Cassandra is fast at writing, so that might be an option.
Let me put my 2 cents in.
Talking about the ability for users to have their own data models is not really about SaaS.
In the pure SaaS paradigm, each user has the same functionality and data model. A user could add his own objects, but not new classes of objects.
So scaling in this paradigm is a rather obvious (though frankly, not always trivial) solution. You can get a cloud DB with built-in multi-tenant support (like Azure, for example), you can use Amazon's RDS and add more instances as the user count grows, you can use sharding (for instance, partitioning by user) if the database supports it, etc.
But when we're talking about a custom data model for each user, it is more like IaaS (infrastructure). It is a lower-level thing, where you just say: "Ok, guys, you may build any data model you want, whatever."
And I believe that if you move the responsibility for data model creation to the user, you should also move the responsibility for database selection, as IaaS does. So the user would say: "Ok, I need a key-value database here" and you provide him a Cassandra table, for example. If he wants an RDBMS, you provide him one as well.
Otherwise, you have to consider not just the data model itself, but also the data strategy your customer needs. Some customers may need key-value storage (backed by some NoSQL DB), others may need an RDBMS. How would you know?
For instance, consider the entity from your example: {"team": {"members": [{"name": "Ben"}]}}. One user would use this model for a single type of query, something like "get the members of the team" and "add a member to the team". Another user may need to query frequently for stats (average team player age, games played). And these two scenarios could demand different database types: the first suits key-value search, the other an RDBMS. How would you guess the database type and structure, given that key-value storages are modeled around queries?
Technically, you may even try to guess the database type from the user's data model and queries, but you would need to impose some restrictions on the user's creativity. Otherwise, it becomes a very nontrivial task.
And about scaling: as each model is unique, you need to add database instances as the user base grows. Of course, you can host multiple users in a single database instance in different schemas, and you will need to determine the number of users per instance by experiment or performance testing.
You may also look at document-oriented databases, but I think you need to review your concept and make some changes. Maybe you have some constraints that make this workable, but I just didn't get them from your post.

Commit to a log like Kafka + database with ACID properties?

I'm planning to test how to make this kind of architecture work:
http://www.confluent.io/blog/turning-the-database-inside-out-with-apache-samza/
Where all the data is stored as facts in a log, but validations of a posted change must be made against a table. For example, if I send "Create Invoice for Customer 1", I will need to validate that the customer exists, among other things; when the validation passes, I commit to the log and apply the change to the table, so the table has the most up-to-date information while I still have the full history of changes.
I could put the log into a table in the database (I use PostgreSQL). However, I'm concerned about the scalability of doing that; also, I wish to subscribe to the event stream from multiple clients, and neither PG nor any other RDBMS I know lets me do this without polling.
But if I use Kafka, I worry about ACID across the two storages: Kafka could end up holding data that PG rolled back, or something similar.
So:
1- Is it possible to keep consistency between an RDBMS and a log storage? OR
2- Is it possible to subscribe in real time and tune PG (or another RDBMS) for fast event storage?
Easy(1) answers for provided questions:
Setting your transaction isolation level properly may be enough to achieve consistency and stop worrying about DB rollbacks. You can still occasionally create inconsistency unless you set the isolation level to 'serializable'. Even then you're guaranteed to be consistent, but could still see undesirable behaviors. For example, a client creates a customer and posts an invoice in rapid succession using an async API, and the invoice event hits your backend system first. In this case the invoice event would be invalidated, and the client would need to retry, hoping that the customer has been created by then. This is easy to avoid if you control the clients and mandate that they use a sync API.
Whether it is possible to store events in a relational DB depends on your anticipated dataset size, hardware, and access patterns. I'm a big-time Postgres fan, and there is a lot you can do to make event lookups blazingly fast. My rule of thumb: if your operating table size is below 200-300 GB and you have a decent server, Postgres is the way to go. With event sourcing there are typically no joins, and a common access pattern is to get all events by id (optionally restricted by time stamp). Postgres excels at this kind of query, provided you index smartly. However, event subscribers will need to pull this data, so it may not be a good fit if you have thousands of subscribers, which is rarely the case in practice.
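The "all events by id, optionally restricted by time stamp" pattern might be sketched like this. SQLite stands in for Postgres, the schema is an assumption, and the composite index on (aggregate_id, created_at) is what would keep the lookup fast in Postgres:

```python
import sqlite3

# Illustrative event-store schema; SQLite stands in for Postgres.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE events (
    id INTEGER PRIMARY KEY,
    aggregate_id TEXT NOT NULL,
    created_at INTEGER NOT NULL,
    payload TEXT NOT NULL)""")
# Composite index matching the access pattern: by id, then by time stamp.
conn.execute("CREATE INDEX ix_events ON events (aggregate_id, created_at)")
conn.executemany(
    "INSERT INTO events (aggregate_id, created_at, payload) VALUES (?, ?, ?)", [
        ("customer-1", 100, "CustomerCreated"),
        ("customer-1", 200, "InvoiceCreated"),
        ("customer-2", 150, "CustomerCreated"),
    ])
conn.commit()

def events_for(conn, aggregate_id, since=0):
    """Fetch an aggregate's events in order, optionally from a time stamp."""
    return [payload for (payload,) in conn.execute(
        "SELECT payload FROM events "
        "WHERE aggregate_id = ? AND created_at >= ? "
        "ORDER BY created_at, id",
        (aggregate_id, since))]

events_for(conn, "customer-1")             # full history for one aggregate
events_for(conn, "customer-1", since=150)  # only the later events
```

Note there are no joins anywhere: the table is append-only and every read is a single index-range scan, which is why Postgres handles this pattern so well.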
"Conceptually correct" answer:
If you still want to pursue the streaming approach and fundamentally resolve race conditions, then you have to provide event-ordering guarantees across all events in the system. For example, you need to be able to order the 'add customer 1' event and the 'create invoice for customer 1' event so that you can guarantee consistency at any time. This is a really hard problem to solve in general for a distributed system (see e.g. vector clocks). You can mitigate it with clever tricks that work for your particular case; e.g., in the example above you can partition your events by customerId as early as they hit the backend, and then you have a guarantee that all events related to the same customer will be processed (roughly) in the order they were created.
Would be happy to clarify my points if needed.
(1) Easy vs simple: mandatory link

Is a document/NoSQL database a good candidate for storing a balance sheet?

If I were to create a basic personal accounting system (because I'm like that - it's a hobby project about a domain I'm familiar enough with to avoid getting bogged-down in requirements), would a NoSQL/document database like RavenDB be a good candidate for storing the accounts and more importantly, transactions against those accounts? How do I choose which entity is the "document"?
I suspect this is one of those cases where actually a SQL database is the right fit and trying to go NoSQL is the mistake, but then when I think of what little I know of CQRS and event sourcing, I wonder if the entity/document is actually the Account, and the transactions are Events stored against it, and whether, when these "events" occur, my application also then writes out to an easily queryable read store like a SQL database.
Many thanks in advance.
Personally, I think it is a good idea, but I am a little biased, because my full-time job is building an accounting system based on CQRS, Event Sourcing, and a document database.
Here is why:
Event Sourcing and Accounting are based on the same principle. You don't delete anything; you only append. If you record a transaction that is wrong, you don't delete it, you create an offsetting transaction. Same thing with events: you don't delete them, you just create an event that cancels out the first one. This means you will be publishing a lot of TransactionAddedEvents.
Next, if you are doing double-entry accounting, recording a transaction is different from the way you view it on a screen (especially in a balance sheet). Hence my liking for CQRS again. We can store the data using correct accounting principles, but our read model can be optimized to show the data the way you want to view it.
In a balance sheet, you want to view all entries for a given account. You don't want to see the whole transaction, because the transaction has two sides; you only want to see the entry that affects that account.
So in your document db you would have an entries collection.
This makes querying very easy. If you want to see all of the entries for an account you just say SELECT * FROM Entries WHERE AccountId = 1. I know that is SQL, but everyone understands the simplicity of this query. It is just as easy in a document db. Plus, it will be lightning fast.
You can then create a balance sheet with a query grouping by account id and restricting on date. Notice that no joins are needed at all, which makes a document db a great choice.
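A toy sketch of that entries collection and the grouping query. The account ids, dates, and sign convention are all made up for illustration:

```python
from collections import defaultdict

# Each double-entry transaction produces two entries that sum to zero;
# the balance sheet just groups entries by account with a date cutoff.
entries = [
    # (account_id, date, amount): debits positive, credits negative
    (1, "2024-01-05",  100.0),   # cash in
    (2, "2024-01-05", -100.0),   # revenue
    (1, "2024-02-01",  -40.0),   # cash out
    (3, "2024-02-01",   40.0),   # expense
]

def entries_for_account(account_id):
    """The equivalent of SELECT * FROM Entries WHERE AccountId = ?"""
    return [e for e in entries if e[0] == account_id]

def balance_sheet(as_of):
    """Group by account id, restricted by date; no joins needed."""
    balances = defaultdict(float)
    for account_id, date, amount in entries:
        if date <= as_of:
            balances[account_id] += amount
    return dict(balances)

balance_sheet("2024-01-31")  # only January's entries contribute
```

Because every transaction's legs cancel out, the balances always sum to zero, which is a cheap sanity check on the read model.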
Theory and Architecture
If you dig around in accounting theory and history a while, you'll see that the "documents" ought to be the source documents -- purchase order, invoice, check, and so on. Accounting records are standardized summaries of those usually-human-readable source documents. An accounting transaction is two or more records that hit two or more accounts, tied together, with debits and credits balancing. Account balances, reports like a balance sheet or P&L, and so on are just summaries of those transactions.
Think of it as a layered architecture -- the bottom layer, the foundation, is the source documents. If the source is electronic, then it goes into the accounting system's document storage layer -- this is where a nosql db might be useful. If the source is a piece of paper, then image it and/or file it with an index number that is then stored in the accounting system's document layer. The next layer up is digital records summarizing those documents; each document is summarized by one or more unbalanced transaction legs. The next layer up is balanced transactions; each transaction is composed of two or more of those unbalanced legs. The top layer is the financial statements that summarize those balanced transactions.
Source Documents and External Applications
The source documents are the "single source of truth" -- not the records that describe them. You should always be able to rebuild the entire db from the source documents. In a way, the db is just an index into the source documents in the first place. Way too many people forget this, and write accounting software in which the transactions themselves are considered the source of truth. This causes a need for a whole 'nother storage and workflow system for the source documents themselves, and you wind up with a typical modern corporate mess.
This all implies that any applications that write to the accounting system should only create source documents, adding them to that bottom layer. In practice though, this gets bypassed all the time, with applications directly creating transactions. This means that the source document, rather than being in the accounting system, is now way over there in the application that created the transaction; that is fragile.
Events, Workflow, and Digitizing
If you're working with some sort of event model, then the right place to use an event is to attach a source document to it. The event then triggers that document getting parsed into the right accounting records. That parsing can be done programmatically if the source document is already digital, or manually if the source is a piece of paper or an unformatted message -- sounds like the beginnings of a workflow system, right? You still want to keep that original source document around somewhere, though. A document db does seem like a good idea for that, particularly if it supports a schema where you can tie the source documents to their resulting parsed and balanced records and vice versa.
You can certainly create such a system.
In that scenario, you have the Account Aggregate, and you also have the TimePeriod Aggregate.
The time period is usually a Month, a Quarter or a Year.
Inside each TimePeriod, you have the Transactions for that period.
That means that loading the current state is very fast, and you have the full log in which you can go backward.
The reason for TimePeriod is that this is usually the boundary in which you actually think about such things.
In this case, a relational database is the most appropriate, since you have relational data (i.e. rows and columns).
Since this is just a personal system, you are highly unlikely to have any scale or performance issues.
That being said, it would be an interesting exercise for personal growth and learning to use a document-based DB like RavenDB. Traditionally, finance has always been a very formal thing, and relational databases are typically considered more formal and rigorous than document databases. But, like you said, the domain for this application is under your control and is fairly straightforward, so complexity and requirements would not get in the way of designing the system.
If it was my own personal pet project, and I wanted to learn more about a new-ish technology and see if it worked in a particular domain, I would go with whatever I found interesting and if it didn't work very well, then I learned something. But, your mileage may vary. :)

Is NoSQL 100% ACID 100% of the time?

Quoting: http://gigaom.com/cloud/facebook-trapped-in-mysql-fate-worse-than-death/
There have been various attempts to overcome SQL's performance and scalability problems, including the buzzworthy NoSQL movement that burst onto the scene a couple of years ago. However, it was quickly discovered that while NoSQL might be faster and scale better, it did so at the expense of ACID consistency.
Wait - am I reading that wrongly?
Does it mean that if I use NoSQL, we can expect transactions to be corrupted (albeit I daresay at a very low percentage)?
It's actually true, and yet also a bit false. It's not about corruption; it's about seeing something different during a (limited) period.
The real thing here is the CAP theorem which simply states you can only choose two of the following three:
Consistency (all nodes see the same data at the same time)
Availability (a guarantee that every request receives a response about whether it was successful or failed)
Partition tolerance (the system continues to operate despite arbitrary message loss)
The traditional SQL systems choose to drop "Partition tolerance" where many (not all) of the NoSQL systems choose to drop "Consistency".
More precise: They drop "Strong Consistency" and select a more relaxed Consistency model like "Eventual Consistency".
So the data will be consistent when viewed from various perspectives, just not right away.
NoSQL solutions are usually designed to overcome SQL's scale limitations. Those scale limitations are explained by the CAP theorem. Understanding CAP is key to understanding why NoSQL systems tend to drop support for ACID.
So let me explain CAP in purely intuitive terms. First, what C, A and P mean:
Consistency: From the standpoint of an external observer, each "transaction" either fully completed or was fully rolled back. For example, when making an Amazon purchase, the purchase confirmation, order status update, inventory reduction, etc. should all appear 'in sync' regardless of the internal partitioning into sub-systems.
Availability: 100% of requests are completed successfully.
Partition Tolerance: Any given request can be completed even if a subset of nodes in the system are unavailable.
What do these imply from a system design standpoint? what is the tension which CAP defines?
To achieve P, we need replicas. Lots of them! The more replicas we keep, the better the chances that any piece of data we need will be available even if some nodes are offline. For absolute "P" we would replicate every single data item to every node in the system. (Obviously, in real life we compromise at 2, 3, etc.)
To achieve A, we need no single point of failure. That means that "primary/secondary" or "master/slave" replication configurations go out the window, since the master/primary is a single point of failure. We need to go with multi-master configurations. To achieve absolute "A", any single replica must be able to handle reads and writes independently of the other replicas. (In reality we compromise with async, queue-based, quorum approaches, etc.)
To achieve C, we need a "single version of truth" in the system. Meaning that if I write to node A and then immediately read back from node B, node B should return the up-to-date value. Obviously this can't happen in a truly distributed multi-master system.
So, what is the "correct" solution to the problem? The details really depend on your requirements, but the general approach is to loosen some of the constraints and compromise on the others.
For example, to achieve a "full write consistency" guarantee in a system with n replicas, the number of replicas read plus the number of replicas written must be greater than n: r + w > n. This is easy to explain with an example: if I store each item on 3 replicas, then I have a few options to guarantee consistency:
A) I can write the item to all 3 replicas and then read from any one of the 3, and be confident I'm getting the latest version.
B) I can write the item to one of the replicas, then read all 3 replicas and choose the latest of the 3 results.
C) I can write to 2 out of the 3 replicas, and read from 2 out of the 3 replicas, and I am guaranteed that one of them will have the latest version.
Of course, the rule above assumes that no nodes have gone down in the meantime. To ensure P + C you will need to be even more paranoid...
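The quorum rule can be checked exhaustively for small n: read and write quorums are guaranteed to intersect exactly when r + w > n, and with r + w = n disjoint quorums become possible. A small sketch:

```python
from itertools import combinations

def quorums_overlap(n, r, w):
    """True iff EVERY write quorum of size w intersects EVERY read
    quorum of size r among n replicas, i.e. a read is guaranteed to
    see the latest write."""
    replicas = range(n)
    return all(set(ws) & set(rs)
               for ws in combinations(replicas, w)
               for rs in combinations(replicas, r))

# The three options from the example, each with n = 3:
quorums_overlap(3, r=1, w=3)  # option A: write all, read any one
quorums_overlap(3, r=3, w=1)  # option B: write one, read all
quorums_overlap(3, r=2, w=2)  # option C: write two, read two
quorums_overlap(3, r=1, w=2)  # r + w = n: disjoint quorums possible
```

The last case shows why the inequality must be strict: with w = 2 and r = 1 on 3 replicas, the read can hit exactly the one replica the write skipped.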
There are also a near-infinite number of 'implementation' hacks - for example the storage layer might fail the call if it can't write to a minimal quorum, but might continue to propagate the updates to additional nodes even after returning success. Or, it might loosen the semantic guarantees and push the responsibility of merging versioning conflicts up to the business layer (this is what Amazon's Dynamo did).
Different subsets of data can have different guarantees (ie single point of failure might be OK for critical data, or it might be OK to block on your write request until the minimal # of write replicas have successfully written the new version)
The patterns for solving the 90% case already exist, but each NoSQL solution applies them in different configurations. The patterns are things like partitioning (stable/hash-based or variable/lookup-based), redundancy and replication, in memory-caches, distributed algorithms such as map/reduce.
When you drill down into those patterns, the underlying algorithms are also fairly universal: version vectors, Merkle trees, DHTs, gossip protocols, etc.
It does not mean that transactions will be corrupted. In fact, many NoSQL systems do not use transactions at all! Some NoSQL systems may sometimes lose records (e.g. MongoDB when you do "fire and forget" inserts rather than "safe" ones), but often this is a design choice, not something you're stuck with.
If you need true transactional semantics (perhaps you are building a bank accounting application), use a database that supports them.
First, asking if NoSQL is 100% ACID 100% of the time is a bit of a meaningless question. It's like asking "Are dogs 100% protective 100% of the time?" Some dogs are protective (or can be trained to be), such as German Shepherds or Doberman Pinschers. Other dogs couldn't care less about protecting anyone.
NoSQL is the label of a movement, not a specific technology. There are several different types of NoSQL databases. There are document stores, such as MongoDB. There are graph databases, such as Neo4j. There are key-value stores, such as Cassandra.
Each of these serves a different purpose. I've worked with a proprietary database that could be classified as a NoSQL database; it's not 100% ACID, but it doesn't need to be. It's a write-once, read-many database. I think it gets rebuilt once a quarter (or once a month?) and then is read thousands of times a day.
There are a lot of different NoSQL store types and implementations. Each of them can resolve the trade-off between consistency and performance differently. The best you can get is a tunable framework.
Also, the phrase "it was quickly discovered" in your citation is plainly stupid: this was no surprising discovery, but a proven fact with deep theoretical roots.
In general, it's not that any given update would fail to save or get corrupted; either of those would obviously be a very big issue for any database.
Where they fall short of ACID is in data retrieval.
Consider a NoSQL DB which is replicated across numerous servers to allow high-speed access for a busy site.
And let's say the site owners update an article on the site with some new information.
In a typical NoSQL database in this scenario, the update would initially affect only one of the nodes. Queries made to the site via the other nodes would not reflect the change right away. In fact, as the data is replicated across the site, different users may be given different content despite querying at the same time. The data can take some time to propagate across all the nodes.
Conversely, in a transactional ACID compliant SQL database, the DB would have to be sure that all nodes had completed the update before any of them could be allowed to serve the new data.
This allows the site to retain high performance and page caching by sacrificing the guarantee that any given page will be absolutely up to date at any given moment.
In fact, if you consider it like this, the DNS system can be considered to be a specialised NoSQL database. If a domain name is updated in DNS, it can take several days for the new data to propagate throughout the internet (depending on TTL configuration).
All this makes NoSQL a useful tool for data such as web site content, where it doesn't necessarily matter that a page isn't instantly up-to-date and consistent as long as it is reasonably up-to-date.
On the other hand, though, it does mean that it would be a very bad idea to use a NoSQL database for a system which does require consistency and up-to-date accuracy. An order processing system or a banking system would definitely not be a good place for your typical NoSQL database engine.
NoSQL is not about corrupted data. It is about viewing your data from a different perspective. It provides some interesting leverage points, which enable a much easier scalability story, and often better usability too. However, you have to look at your data differently and program your application accordingly (e.g., embrace the consequences of BASE instead of ACID). Most NoSQL solutions prevent you from making decisions that could make your database hard to scale.
NoSQL is not for everything, but ACID is not the most important factor from the end-user perspective. It is just us developers who cannot imagine a world without ACID guarantees.
You are reading that correctly. If you have the A and P of CAP, your data will be inconsistent. The more users, the more inconsistency. And since having many users is the main reason to scale, don't expect the inconsistencies to be rare. You've already seen data pop in and out of Facebook. Imagine what that would do to Amazon.com's stock inventory figures if you left out ACID. Eventual consistency is merely a nice way of saying that you don't have consistency, and that you should write an application that doesn't need it. Some types of games and social-network applications do not need consistency. There are even line-of-business systems that don't, but those are quite rare. When your client calls because the wrong amount of money is in an account, or when an angry poker player didn't get his winnings, the answer should not be that this is how your software was designed.
The right tool for the right job. If you have fewer than a few million transactions per second, you can use a consistent NewSQL or NoSQL database such as VoltDB (non-concurrent Java applications) or Starcounter (concurrent .NET applications). There is just no need to give up ACID these days.