Direct use of PostgreSQL's Multi-Version Concurrency Control

Is it possible to directly use Multi-Version Concurrency Control as a client of a PostgreSQL database? I would like to manually browse/add/delete/restore old versions.
My use case requires keeping multiple previous versions of the data (I have a lot of data and a lot of versions).
The official documentation describes the MVCC mechanism (https://www.postgresql.org/docs/9.5/static/mvcc-intro.html), but without any API for using it directly.

Simply put, no. MVCC isn't intended as a "version repository", and only keeps its "versions" of data long enough to satisfy the requirements of active transactions. Once a transaction concludes, any data versions created to ensure data consistency for that transaction are discarded. The "multiple versions" of MVCC refer to differing versions or views of data which may be needed by different concurrent transactions. Within a single transaction only a single version of the data is ever visible. If you need to maintain "versions" of your data, you'll need to make provisions for doing this yourself.
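If you do need long-lived versions, the usual approach is an explicit history table that your application (or a trigger) maintains. A minimal sketch with psycopg2, where the documents/documents_history tables and all column names are invented for illustration:

```python
import psycopg2

# Placeholder connection string; adjust for your environment.
conn = psycopg2.connect("dbname=mydb user=myuser")

with conn, conn.cursor() as cur:
    # A hypothetical "documents" table plus a history table that keeps
    # every superseded version of a row, stamped with when it was archived.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS documents (
            id   serial PRIMARY KEY,
            body text NOT NULL
        );
        CREATE TABLE IF NOT EXISTS documents_history (
            id          integer NOT NULL,
            body        text NOT NULL,
            archived_at timestamptz NOT NULL DEFAULT now()
        );
    """)

def update_with_history(conn, doc_id, new_body):
    """Copy the current row into the history table, then overwrite it,
    all in one transaction so the archive and the update stay consistent."""
    with conn, conn.cursor() as cur:
        cur.execute(
            "INSERT INTO documents_history (id, body) "
            "SELECT id, body FROM documents WHERE id = %s",
            (doc_id,))
        cur.execute("UPDATE documents SET body = %s WHERE id = %s",
                    (new_body, doc_id))
```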
Best of luck.

Related

Disabling MVCC in Postgres

I've decades of experience with MSSQL but none with Postgres and its MVCC style of concurrency control.
In MSSQL, if I had a very large dataset that was read-only, I would set the database to read-only (for safety) and use the READ UNCOMMITTED transaction isolation level. That would avoid lock contention, which the dataset didn't need anyway.
In Postgres, is there some equivalent? Some way of setting a database to read-only and reassuring PG that it is completely safe not to use MVCC, and to just read without making row copies? It seems that MVCC has considerable overhead, which for multiple readers of very large, passive datasets looks potentially expensive.
Edit: the comments point out a misunderstanding on my part: copies are only made when writing occurs, not on reads as I assumed.
"MVCC" stands for "Multiversion Concurrency Control". Multiple versions of the same table row are only spawned by write activity (mostly UPDATE).
If your database is read-only (whether enforced or voluntary, it's all the same for the purpose of this question), then there cannot be multiple versions of a row, ever, and the question is moot.
No, there is no way to do that, and there is no reason for it either.
In PostgreSQL, writers never block readers and vice versa, precisely because of the MVCC implementation that you want to disable. So there is no need for the unsavory crutch of reading uncommitted data.
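That said, the read-only part of the MSSQL habit does carry over: you can tell PostgreSQL to make new transactions on a database read-only by default. A minimal sketch with psycopg2, assuming a database named mydb:

```python
import psycopg2

# Placeholder connection string; connect as a superuser or the DB owner.
conn = psycopg2.connect("dbname=postgres user=myuser")
conn.autocommit = True

with conn.cursor() as cur:
    # New sessions on "mydb" will start their transactions read-only.
    cur.execute("ALTER DATABASE mydb SET default_transaction_read_only = on")
    # A single session or transaction can also opt in explicitly with
    # "SET default_transaction_read_only = on" or "SET TRANSACTION READ ONLY".
```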

Why use locking in MongoDB?

MongoDB is from the NoSQL era, and isn't locking something that belongs to RDBMSs? From Wikipedia:
Optimistic concurrency control (OCC) is a concurrency control method for relational database management systems...
So why do I find is_locked in PyMongo? And even in a driver that makes non-blocking calls, locks still exist: Motor has is_locked too.
NoSQL does not automatically mean no locks.
There are always some operations that do require a lock.
For example, building an index.
And the official MongoDB documentation is a more reliable source than Wikipedia (no offense meant to Wikipedia :) )
http://docs.mongodb.org/manual/faq/concurrency/
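For what it's worth, the is_locked the question mentions is just the driver exposing the server-wide fsync lock. A minimal sketch, assuming an older PyMongo release (the 2.x/3.x era) where these helpers still exist; current drivers have dropped them:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")

# fsync(lock=True) flushes pending writes and takes the global write lock;
# is_locked reports whether that lock is currently held.
client.fsync(lock=True)
print(client.is_locked)   # True while the server blocks further writes
client.unlock()           # release the lock again
```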
Mongo does in-place updates, so it needs to lock in order to modify the database. There are other things that need locks, so read the link @Tigra provided for more info.
This is pretty standard as far as databases go, and it isn't an RDBMS-specific thing (Redis also does this, but on a per-key basis).
There are plans to implement collection-level (instead of database-level) locking: https://jira.mongodb.org/browse/SERVER-1240
Some databases, like CouchDB, get around the locking problem by only appending new documents. They create a new, unique revision id and once the document is finished writing, the database points to the new revision. I'm sure there's some kind of concurrency control when changing which revision is used, but it doesn't need to block the database to do that. There are certain downsides to this, such as compaction needing to be run regularly.
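To make that concrete, an update against CouchDB's HTTP API looks roughly like this (a sketch using the requests library; the database and document names are made up, and the document is assumed to already exist):

```python
import requests

BASE = "http://localhost:5984/mydb"   # placeholder database

# Read the current document; the response carries its _rev revision id.
doc = requests.get(f"{BASE}/invoice-42").json()

# Modify it and write it back, quoting the revision we read.
doc["status"] = "paid"
resp = requests.put(f"{BASE}/invoice-42", json=doc)

if resp.status_code == 409:
    # Someone else already wrote a newer revision: re-read and retry.
    print("conflict - fetch the latest _rev and try again")
```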
MongoDB implements a database-level locking system. This means that operations which are not atomic will lock at the database level, unlike SQL databases, where most implementations lock at the table level for basic operations.
In-place updates only occur with certain operators ($set being one of them). The MongoDB documentation used to have a page listing all of them, but I can't find it now.
MongoDB currently implements a read/write lock, where reads and writes take separate locks but can block each other.
Locks are utterly vital to any database. For example, how can you ensure a consistent read of a document if it is currently being written to? And if you write to a document, how do you ensure that you apply only that single update at once, and not multiple updates at the same time?
I am unsure how version control can prevent this in CouchDB; locks are really quite vital for a consistent read and are separate from version control. For example, what if you wish to apply a read lock to the same version, or read a document that is currently being written to a new revision? You will obviously see a lock queue appear. Even though version control might help a little with write-lock saturation, there will still be a write lock, and it will still need to operate at some level.
As for concurrency features: MongoDB has the ability, for one, to yield an operation to other operations if the data it needs is not in RAM. This means that locks will not just sit there waiting for data to be paged in; other operations will run in the meantime.
As a side note, MongoDB actually has more locks than this; it also has a JavaScript lock which is global and blocking, and which does not have the normal concurrency features of regular locks.
and even in a driver that makes non-blocking calls
Hmm, I think you might be confused about what is meant by a "non-blocking" application or server: http://en.wikipedia.org/wiki/Non-blocking_algorithm
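In other words, "non-blocking" describes the application side of the driver, not the server. Motor issues the same commands as PyMongo but returns awaitables instead of blocking the calling thread; any locking still happens on the server. A minimal sketch (database, collection, and field names are illustrative):

```python
import asyncio
from motor.motor_asyncio import AsyncIOMotorClient

async def main():
    client = AsyncIOMotorClient("mongodb://localhost:27017")
    users = client.test_db.users

    # The event loop is free to run other tasks while MongoDB handles the
    # query; the server's own locking is unchanged compared to PyMongo.
    doc = await users.find_one({"name": "alice"})
    print(doc)

asyncio.run(main())
```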

Is there a reliable (single server) MongoDB alternative?

I like the idea of document databases, especially MongoDB. It allows for faster development, as we don't have to adjust database schemas. However, MongoDB doesn't support multi-document transactions and doesn't guarantee that modifications get written to disk immediately like normal databases do (I know that you can make the time between flushes quite small, but it's still no guarantee).
Most of our projects are not so big that they need things like multi-server environments. So, keeping that in mind: are there any single-server MongoDB-like document databases that support multi-document transactions and reliable flushing to disk?
It might be worthwhile to look at ArangoDB. It is a multi-model database with a flexible data model for documents, graphs, and key-values. With respect to your specific requirements, ArangoDB has full ACID transactions which can span multiple documents in the same collection as well as multiple collections (see Transactions in ArangoDB). That is, you can execute a group of manipulations of your documents together in a transaction and have guaranteed atomicity and isolation. If you additionally set waitForSync: true
(as described further down on said page), you get a guaranteed sync to disk before your transaction reports completion. Note that this happens automatically if your transaction spans multiple collections.
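As a rough illustration of what such a transaction looks like over ArangoDB's HTTP API (collection and document names are invented, and the exact module path used inside the action has varied between ArangoDB versions):

```python
import requests

# Placeholder endpoint; adjust host, database and credentials as needed.
URL = "http://localhost:8529/_db/_system/_api/transaction"

payload = {
    # Declare up front which collections the transaction will write to.
    "collections": {"write": ["accounts"]},
    # The action runs server-side as a single ACID unit over both documents.
    "action": """
        function (params) {
          var db = require('@arangodb').db;
          db.accounts.update(params.fromKey, {balance: params.fromBalance});
          db.accounts.update(params.toKey,   {balance: params.toBalance});
        }
    """,
    "params": {"fromKey": "alice", "toKey": "bob",
               "fromBalance": 90, "toBalance": 110},
    # Ask ArangoDB to sync to disk before reporting the transaction as done.
    "waitForSync": True,
}

print(requests.post(URL, json=payload).json())
```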
A very short answer to your specific (but brief) requirements:
Are there any single server MongoDB-like document databases that support multi-document transactions and reliable flushing to disk?
RavenDB [1] provides support for multi-doc transactions [2]. Unfortunately I don't know how it handles durability.
CouchDB [3] provides durable writes, but no multi-doc transactions
RethinkDB [4] provides durable writes, but no multi-doc transactions.
So you might wonder what's different about these 3 solutions? Most of the time it's their querying support (I'd say RethinkDB has the most advanced one, covering pretty much all types of queries: sub-queries, JOINs, aggregations, etc.), their history (read: production readiness -- here I'd probably say CouchDB is in the lead), their distribution model (you mentioned that's not interesting for you), and their licensing (RavenDB: commercial, CouchDB: Apache License, RethinkDB: AGPL).
The next step would be for you to briefly look over their feature set and figure out which one comes close to your needs and give it a try.
I have some experience with CouchDB and ArangoDB which I can share:
You can run CouchDB with durability turned on (delayed_commits = false) so it will also sync your data to disk.
However, this is a global setting so it affects all writes. AFAIK you cannot set it on a per-collection level (the CouchDB term for "collection" would be "database").
Regarding multi-document operations: CouchDB has MVCC, so reading multiple documents from the same database provides a consistent result even in the face of parallel writers.
Writing multiple documents to the same database can also be made transactional for special cases, e.g. when using the bulk documents API.
But there is no way to execute cross-database operations in CouchDB. This is just not intended.
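For reference, the bulk documents API mentioned above is a single HTTP request carrying several documents (a sketch with the requests library; names are illustrative):

```python
import requests

# One request writes several documents to the placeholder "mydb" database.
resp = requests.post(
    "http://localhost:5984/mydb/_bulk_docs",
    json={"docs": [
        {"_id": "order-1", "status": "new"},
        {"_id": "order-2", "status": "new"},
    ]},
)
# CouchDB reports success or conflict separately for each document.
print(resp.json())
```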
On ArangoDB: in ArangoDB you can turn on immediate syncing to disk on a per-collection level: you can turn it on for collections which you cannot tolerate any data loss in. You can turn immediate syncing off for not-so-important collections for performance reasons. It will then still sync modifications to disk frequently, but not immediately. It provides multi-document and multi-collection transactions.
Check out the following:
arangodb
rethinkdb
I would suggest you look at Couchbase.
Couchbase can be run single server & you can add nodes later if you want.
Couchbase has memcached integrated so you have fast caching of common data, with a reliable method of writing updates to disk.
They also have a new query language (in development, but you can use it now) called N1QL (pronounced "nickel") that gives you SQL-like access, if that's important to you.
With cross-datacenter replication, you can keep two DBs on different machines or data centers in sync, which is good for having an offsite backup. This also allows you to add Elasticsearch if you wish to have a full-text search engine for those types of queries.
In short, Couchbase is a pretty complete solution, all open source, and has an intelligent (in my opinion) architecture for addressing the typical problems with distributed databases (e.g. every document is "owned" by a given node, so all changes go to that node and the updates are then replicated; this is better, I think, than, say, Riak, where you can have updates go to two nodes and then have to be reconciled).
You can use Couchbase on one node to run the database for many projects by separating the projects into different buckets.
There are so many NoSQL databases that it's definitely hard to choose one. You will have to come up with proper requirements and know exactly what you want.
The following link compares almost all the popular NoSQL databases:
http://kkovacs.eu/cassandra-vs-mongodb-vs-couchdb-vs-redis
I hope this helps.
Berkeley DB is one we used. It supports ACID. It does have transactions, but I'm not entirely sure how your term "multi-document" applies. I imagine that as long as each database (i.e. individual document) shares the same BDB environment (i.e. where transactions are stored), it may give you what you want. BDB does have other tradeoffs though: with full durability and high concurrency, commits are pretty slow.
Give OrientDB a try: http://www.orientdb.org/
"OrientDB has the flexibility of the Document databases and the power of the Graph databases to manage relationships. It can work in schema-less mode, schema-full or a mix of both. Supports advanced features such as ACID Transactions, Fast Indexes, Native and SQL queries. It imports and exports documents in JSON. OrientDB uses a new indexing algorithm called MVRB-Tree, derived from the Red-Black Tree and from the B+Tree with benefits of both: fast insertion and ultra fast lookup".
You do not have to adjust schemas in document data stores, but that does not mean you do not need some sort of schema as you probably want to do something meaningful with your data. It appears you would like an ACID database. If you have relational data, and you need transactions with that data, well it sounds very much like you need a relational database.
With "NoSQL" databases like Mongo, you are giving up ACID for features like many writable replicas, sharding, and quick accessing of document data. Sounds like you do not benefit from that so why take the tradeoff? A lot of people have been doing hybrid approaches lately with PostgreSQL by storing documents in a relational table as blobs of JSON. With this, you can have the advantage of storing your data as not strictly structured columns where it is not needed.
So if you have multiple documents that you need to update transactionally, you can break the keys out into columns and have a "document" column (or similar) that is simply a blob of JSON you serialize and deserialize. This is not criticizing Mongo or other document stores as databases; they are just not really a good choice for transactional multi-document data. I believe MarkLogic does ACID over multiple documents too.
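A rough sketch of that hybrid approach with psycopg2 (the documents table, key column, and balance field are invented for this example): keyed columns for what you query on, a jsonb blob for the rest, and one transaction covering updates to several "documents":

```python
import psycopg2
from psycopg2.extras import Json

# Placeholder connection string; adjust for your environment.
conn = psycopg2.connect("dbname=mydb user=myuser")

with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS documents (
            key      text PRIMARY KEY,
            document jsonb NOT NULL
        )
    """)

def transfer(conn, from_key, to_key, amount):
    """Update two JSON 'documents' atomically: both commit or neither does."""
    with conn, conn.cursor() as cur:
        for key, delta in ((from_key, -amount), (to_key, amount)):
            # Lock the row, adjust the JSON blob, and write it back.
            cur.execute("SELECT document FROM documents WHERE key = %s FOR UPDATE",
                        (key,))
            doc = cur.fetchone()[0]
            doc["balance"] += delta
            cur.execute("UPDATE documents SET document = %s WHERE key = %s",
                        (Json(doc), key))
```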
I think a lot of people find MongoDB appealing due to its schemalessness, but I think in the end they get bitten by trying to shoehorn a relational model into it. So, as always, the DB choice depends on what your data looks like.
If I were you, I would take a close look at Solr. The underlying data layer (Lucene) is by far the most mature of the NoSQL databases, and Solr makes installing, configuring, and integrating a single-host Lucene store trivial.
In answer to your question, it supports user-delineated transactions. The read-optimised nature of Lucene can make it unsuitable for many applications, but most of those are well suited to Solr/Lucene+[SQL,Cassandra,CouchDB,RDF] depending on the requirements.
Personally I tend to start with Solr+SQL or Solr+RDF, but I know some people who love the whole NodeJS+CouchDB style, and I am convinced of the value of the flexibility that provides.
The bottom line is that there are enough NoSQL and SQL extensions out there that care about data integrity to satisfy any requirement you have, without you having to compromise your or your users' data.
Personally I believe you really need to check what your requirements are.
Due to the way your server's OS works, it is hard to say that everything goes to disk "immediately" even when you tell it to. Certainly I know that ACID technologies like SQL databases are vulnerable to partial corruption from unfinished work, and to losing operations within a specific window, when a single server goes down. Unfortunately this is one of the problems of using a single server; you have no choice but to accept it.
I should note that a transaction does not ensure that your server will receive the entire data before failure (http://en.wikipedia.org/wiki/Database_transaction); I mean, what if the server dies partway through a transaction?
You can perform a safe rollback based on constraints with transactions but few databases will provide the ability to continue playing the transaction unless they have already received all necessary data for it (which isn't normally the case), by which time the data might even be stale anyway.
In fact, due to the weight of some transactions and the number of queries performed within them, I reckon you might at times see a greater window of operational loss using transactions than from the 60 ms write-to-disk window in MongoDB. But of course that depends upon abuse; however, just like with stored procedures, this abuse is commonplace.
Transactions shine for cascading deletes and typical scenarios like transferring money between bank accounts. However, cascading deletes are normally better done (as most sites do) by a cron job, with the application marking the row as deleted (to avoid a transaction rollback showing the deleted data to the user again); this way you can do a lot of work to ensure consistency that you cannot do in real time while the user is using your application.
So you should really question why you need a given technology and what it will accomplish; at the moment, the brevity of your question tells me you're not completely sure about your requirements.

Concurrency, Atomicity, and Isolation in Entity Framework

Based on some periodically and concurrently incoming data, I'm performing an operation that will either insert a new row into a table, or update an existing row in the same table. Whether it inserts or updates a row is dependent on the states of the existing rows. So, the result of this operation will be affected by previous runs of this operation, and affect subsequent runs. I need to ensure atomicity/isolation using transactions, or locks, or something. There seems to be so many options and caveats with Entity Framework (and I'm a complete newbie with database stuff in general too) that I have no idea what direction I should be headed. TransactionScope, BeginTransaction, ambient transactions? Serializable or RepeatableRead? SaveChanges and AcceptAllChanges? Do I even need to do anything special? The fact that a new row can be added makes me worry especially about phantom rows, though I barely understand what that means. Any guidance on the subject would be greatly appreciated.
This tutorial may be helpful to you - http://www.asp.net/mvc/tutorials/getting-started-with-ef-using-mvc/handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application
Quote:
Pessimistic Concurrency (Locking)
If your application does need to prevent accidental data loss in concurrency scenarios, one way to do that is to use database locks. This is called pessimistic concurrency. For example, before you read a row from a database, you request a lock for read-only or for update access. If you lock a row for update access, no other users are allowed to lock the row either for read-only or update access, because they would get a copy of data that's in the process of being changed. If you lock a row for read-only access, others can also lock it for read-only access but not for update. Managing locks has some disadvantages. It can be complex to program. It requires significant database management resources, and it can cause performance problems as the number of users of an application increases (that is, it doesn't scale well). For these reasons, not all database management systems support pessimistic concurrency. The Entity Framework provides no built-in support for it, and this tutorial doesn't show you how to implement it.
Optimistic Concurrency
The alternative to pessimistic concurrency is optimistic concurrency. Optimistic concurrency means allowing concurrency conflicts to happen, and then reacting appropriately if they do. For example, John runs the Departments Edit page, changes the Budget amount for the English department from $350,000.00 to $100,000.00. (John administers a competing department and wants to free up money for his own department.)
There are code examples for both models in the tutorial.
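The tutorial's code is in Entity Framework; purely to illustrate the same optimistic pattern outside EF, here is a sketch in plain SQL driven from Python with psycopg2 (the departments table and its row_version column are invented for this example):

```python
import psycopg2

# Placeholder connection string; adjust for your environment.
conn = psycopg2.connect("dbname=mydb user=myuser")

def update_budget(conn, dept_id, expected_version, new_budget):
    """Optimistic concurrency: the UPDATE only succeeds if the row still has
    the version we read earlier; otherwise another user changed it first."""
    with conn, conn.cursor() as cur:
        cur.execute(
            """UPDATE departments
                  SET budget = %s, row_version = row_version + 1
                WHERE id = %s AND row_version = %s""",
            (new_budget, dept_id, expected_version))
        if cur.rowcount == 0:
            raise RuntimeError("concurrency conflict - reload the row and retry")
```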

Is it possible to implement Multi-Version Concurrency Control (MVCC) on top of MongoDB?

MongoDB is to me a great database. However there are cases where I really need atomic multi-document transactions. For example to transfer things (like money or reputation) between accounts and this needs to either succeed completely or fail completely.
I wonder if it would be possible to interact with MongoDB through a library implementing the MultiVersion Concurrency Control pattern.
How bad would it be in terms of performance?
Would it be possible and worthwhile to use a hybrid approach, using the 'mongo-mvcc' library only when necessary and the traditional DB connection when working on a single document, or would this break the MVCC mechanism?
The simplest way is to use locks (two-phase commit), although this is not very efficient in some cases. For higher concurrency, some kind of MVCC can be implemented on top of Mongo. This article provides a good description:
http://highlyscalable.wordpress.com/2012/01/07/mvcc-transactions-key-value/
Money transfers can be implemented via two-phase commit: http://www.mongodb.org/display/DOCS/two-phase+commit
There is an implementation of MVCC on MongoDB available now on GitHub:
https://github.com/igd-geo/mongomvcc
MongoDB isn't really designed to work with transactions. There is a really good discussion of how you might be able to implement this over at: http://kylebanker.com/blog/2010/04/30/mongodb-and-ecommerce/
You could create a versions collection and have a document for the last committed version.
Atomically update this document with a read timestamp (rts), which is not a time-based timestamp but a monotonically increasing number, whenever your application code reads a document from a collection.
Before you update a collection, fetch this versions document and check whether there is a read timestamp below your current transaction number; if there is, abort the read or write.
Update the versions document with a lastCommit value when you want to "publish" a version of a record and make it visible.
You should only "see" transaction data that is less than or equal to the last committed transaction number.
I implemented MVCC in Java in this repository.
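Roughly, the versions-collection bookkeeping described above might look like this with PyMongo (a heavily simplified sketch, not the Java implementation mentioned above; all names are illustrative):

```python
from pymongo import MongoClient, ReturnDocument

client = MongoClient("mongodb://localhost:27017")
db = client.mvcc_demo   # placeholder database

def begin(db):
    """Atomically hand out the next monotonically increasing number (rts)."""
    doc = db.versions.find_one_and_update(
        {"_id": "global"},
        {"$inc": {"rts": 1}, "$setOnInsert": {"lastCommit": 0}},
        upsert=True,
        return_document=ReturnDocument.AFTER,
    )
    return doc["rts"]

def commit(db, txn_number):
    """'Publish' a transaction by advancing lastCommit, but only if no newer
    transaction has committed already; otherwise abort so the caller retries."""
    result = db.versions.update_one(
        {"_id": "global", "lastCommit": {"$lt": txn_number}},
        {"$set": {"lastCommit": txn_number}},
    )
    if result.modified_count == 0:
        raise RuntimeError("abort: a newer transaction already committed")
```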
Well, when you need real TRANSACTIONS, you use an RDBMS, which is designed to support them :) NoSQL databases are faster and more scalable mainly because they don't support transactions.
If you need both, maybe it's a good idea to have a transactional layer to support transactions and a NoSQL layer for other purposes. In some cases it shouldn't be difficult to create a hybrid system using, for example, MongoDB and PostgreSQL.