How are concurrent operations done on B-trees in database management? - rdbms

How are concurrent operations done on B-trees in database management? How are they actually implemented?

CouchDB is implemented using B+trees. The update mechanics are described quite nicely in this section of The Definitive Guide:
The short answer is that because CouchDB uses append-only files, the B-tree root node must be rewritten every time the file is updated. However, old portions of the file will never change, so every old B-tree root, should you happen to have a pointer to it, will also point to a consistent snapshot of the database.
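To make the mechanics concrete, here is a toy sketch, not CouchDB's actual code, of the append-only, copy-on-write idea: nodes are immutable once written, an update appends a fresh root, and any old root offset still describes a complete, consistent snapshot. (A real B+tree would rewrite only the leaf-to-root path, O(log n) nodes; this sketch collapses the tree to a single root node for brevity.)

```python
class AppendOnlyTree:
    def __init__(self):
        self.log = []                            # stands in for the append-only file
        self.root = self._append({"keys": {}})   # empty root node

    def _append(self, node):
        self.log.append(node)                    # nodes are never mutated after this
        return len(self.log) - 1                 # "pointer" = offset into the file

    def set(self, key, value):
        old = self.log[self.root]
        new_keys = dict(old["keys"])             # copy-on-write: the old node is untouched
        new_keys[key] = value
        self.root = self._append({"keys": new_keys})  # rewritten root goes at the end
        return self.root                         # offset of this snapshot's root

    def get(self, key, root=None):
        root = self.root if root is None else root
        return self.log[root]["keys"].get(key)


tree = AppendOnlyTree()
snapshot = tree.set("a", 1)      # keep a pointer to this root
tree.set("a", 2)
print(tree.get("a"))             # 2 -- via the current root
print(tree.get("a", snapshot))   # 1 -- the old root is still a consistent snapshot
```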

Related

Is it practical to use one table for reading purpose only in a relational database?

I know this question would not be ideal in a real database world; however, I am building a web REST API to serve a result that potentially needs to join almost every table (I use normalization, for sure).
So is it OK to have one single table that holds the metadata used by the read API, where that table also gets updated whenever data is updated in the other tables? I am using PostgreSQL, by the way.
This is not very clear, so I will state my understanding of the question and give you what I see as the tradeoffs.
First... it sounds to me like you want to effectively materialize a metadata table and have it live-updated when other tables update. This is not really what the MATERIALIZED VIEW support in PostgreSQL is for.
You can use a trigger to update the data whenever something changes, as in the sketch below. Because of the way PostgreSQL handles things, this leads to more disk and CPU activity, but will probably add more of the latter than the former. So if you are heavily CPU-bound, that will pose more problems than if you are I/O-bound.
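A minimal sketch of that trigger approach with psycopg2; the contacts/emails source tables and the contact_meta read table are hypothetical names, not anything from the question:

```python
import psycopg2

conn = psycopg2.connect("dbname=app")  # assumed connection string
with conn, conn.cursor() as cur:
    # Rebuild the affected row of the read-side table on every change.
    cur.execute("""
        CREATE OR REPLACE FUNCTION refresh_contact_meta() RETURNS trigger AS $$
        BEGIN
            DELETE FROM contact_meta WHERE contact_id = NEW.id;
            INSERT INTO contact_meta (contact_id, full_name, email)
            SELECT c.id, c.first_name || ' ' || c.last_name, e.address
            FROM contacts c JOIN emails e ON e.contact_id = c.id
            WHERE c.id = NEW.id;
            RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;
    """)
    # EXECUTE FUNCTION needs PostgreSQL 11+; older versions use EXECUTE PROCEDURE.
    cur.execute("""
        DROP TRIGGER IF EXISTS contacts_meta_sync ON contacts;
        CREATE TRIGGER contacts_meta_sync
        AFTER INSERT OR UPDATE ON contacts
        FOR EACH ROW EXECUTE FUNCTION refresh_contact_meta();
    """)
```

The same trigger would also need to be attached to the other source tables for the read table to stay fully in sync.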
Using triggers in this way adds a fair bit of complexity to your database and may reduce write scaling a bit but if the data is seldom written but read frequently it may be a clear win.
So in answer to your question, yes it is practical in at least some cases. Whether it is practical in your case, that will be for you to decide.

Can Solr be used as an alternative to a cache?

We are using PostgreSQL for persistence and Ehcache as our cache. We have recently introduced Solr to enable faster searches (fuzzy and exact).
So my question is: can Solr be tuned in such a way that it can replace Ehcache (say, by running in cloud mode)?
Just to add some context to the question:
We have a bunch of tables that store contact information. Ehcache is currently being used to get these contacts for a given ID. Solr will be used extensively for search-related operations. Since Solr is already doing the search... why not replace Ehcache (since, in some way, a lookup by ID is also a search), provided the performance is not compromised?
In addition to the other reasons why No would be the answer, there is also the granularity of changes. Lucene (the underlying library) stores data in a read-only form. Solr adds updatable documents on top of that, but making them visible is still a heavy operation. Recent versions of Solr made it easier and faster with soft commits, but the price of making a change visible is still non-trivial.
So, it is really not optimized for updating/caching a single value. The data structures are optimized for batched document updates followed by fast search, with caching, over that temporarily read-only state.
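To make that cost concrete, here is a hedged sketch against a hypothetical local Solr core named contacts (nothing here comes from the question): a document only becomes searchable after a commit, and even the cheaper soft commit reopens searchers and discards caches.

```python
import requests

SOLR = "http://localhost:8983/solr/contacts"  # assumed core name

# Adding a document does not, by itself, make it searchable.
requests.post(f"{SOLR}/update", json=[{"id": "42", "name": "Ada"}]).raise_for_status()

# Visibility costs a commit. softCommit skips the fsync of a hard
# commit but still reopens searchers and throws away caches, so paying
# it per single-value change (as a cache must) is expensive.
requests.post(f"{SOLR}/update", params={"softCommit": "true"}, json=[]).raise_for_status()
```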
I'll take a shot, but it's unlikely anyone will have a definitive answer to such a vague question. https://lucidworks.com/blog/2012/07/23/sizing-hardware-in-the-abstract-why-we-dont-have-a-definitive-answer/ is four years old now but still relevant. The answers will depend entirely on what you need to do.
So, some generic statements:
SolrCloud or not is unlikely to be an issue that affects your decision. Use it if you want Solr to handle replication and index management. Don't, if you'd rather do it yourself.
Solr is fast (given enough memory), so it's certainly possible you could get rid of a caching layer. Only you know your requirements, though.
Read through https://wiki.apache.org/solr/SolrCaching; in particular, you might be interested in the QueryResultCache.
The simple answer is: No.
Reason: a cache is in memory, but Solr's index is on disk (except for the parts that have been cached), and reading from memory is thousands of times faster than reading from disk.
So Solr can't be used as a general-purpose cache; in that case, Ehcache, Memcached, or Redis would be a better choice.
What Solr is good at is search (analyzers, tokenizers, and filters), not caching.
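For the lookup-by-ID use case from the question, the usual pattern is cache-aside in front of the database; a hedged sketch with redis-py (key names and the TTL are illustrative assumptions):

```python
import json
import redis

r = redis.Redis()  # assumes a local Redis instance

def get_contact(contact_id, load_from_db):
    key = f"contact:{contact_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)           # memory hit: microseconds
    contact = load_from_db(contact_id)      # database hit: milliseconds
    r.setex(key, 300, json.dumps(contact))  # keep it warm for 5 minutes
    return contact
```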

Why use locking in MongoDB?

MongoDB is from the NoSQL era, and isn't a lock something related to RDBMSs? From Wikipedia:
Optimistic concurrency control (OCC) is a concurrency control method for relational database management systems...
So why do I find is_locked in PyMongo? Even in a driver that makes non-blocking calls, Lock still exists: Motor has is_locked too.
NoSQL does not automatically mean no locks.
There are always some operations that do require a lock, for example, building an index.
And the official MongoDB documentation is a more reliable source than Wikipedia (no offense meant to Wikipedia :) ):
http://docs.mongodb.org/manual/faq/concurrency/
Mongo does in-place updates, so it needs to lock in order to modify the database. There are other things that need locks too, so read the link @Tigra provided for more info.
This is pretty standard as far as databases and it isn't an RDBMS-specific thing (Redis also does this, but on a per-key basis).
There are plans to implement collection-level (instead of database-level) locking: https://jira.mongodb.org/browse/SERVER-1240
Some databases, like CouchDB, get around the locking problem by only appending new documents. They create a new, unique revision id and once the document is finished writing, the database points to the new revision. I'm sure there's some kind of concurrency control when changing which revision is used, but it doesn't need to block the database to do that. There are certain downsides to this, such as compaction needing to be run regularly.
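A quick sketch of what that looks like from a client, assuming a local CouchDB with a database named demo and an existing document (all names illustrative): every update names the revision it read, and a stale revision earns an HTTP 409 Conflict instead of waiting on a lock.

```python
import requests

DB = "http://localhost:5984/demo"  # assumed database URL

# Read the document body along with its current revision (_rev).
doc = requests.get(f"{DB}/some_doc").json()
doc["count"] = doc.get("count", 0) + 1

# The PUT carries the _rev we read. If another writer got there first,
# CouchDB answers 409 Conflict; nobody ever blocks on a lock.
resp = requests.put(f"{DB}/some_doc", json=doc)
if resp.status_code == 409:
    print("conflict: re-read the document and try again")
```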
MongoDB implements a database-level locking system. This means that operations which are not atomic will lock at a per-database level, unlike most SQL technologies, which lock at the table level for basic operations.
In-place updates only occur with certain operators, $set being one of them; the MongoDB documentation used to have a page that listed all of them, but I can't find it now.
MongoDB currently implements a read/write lock whereby each is separate but they can block each other.
Locks are utterly vital to any database. For example, how can you ensure a consistent read of a document if it is currently being written to? And if you write to the document, how do you ensure that you apply only that single update at once and not multiple updates at the same time?
I am unsure how version control can stop this in CouchDB; locks are really quite vital for a consistent read and are separate from version control. For example, what if you wish to apply a read lock to the same version, or read a document that is currently being written to a new revision? You will obviously see a lock queue appear. Even though version control might help a little with write-lock saturation, there will still be a write lock and it will still need to operate at some level.
As for concurrency features: MongoDB has the ability (for one), if the data is not in RAM, to yield an operation in favour of other operations. This means that locks will not just sit there waiting for data to be paged in; other operations will run in the meantime.
As a side note, MongoDB actually has more locks than this. It also has a JavaScript lock, which is global and blocking and does not have the normal concurrency features of regular locks.
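Applications also commonly layer their own optimistic concurrency on top of the server's locks to make read-modify-write safe. A hypothetical PyMongo sketch with an application-managed version field (the collection and field names are made up):

```python
from pymongo import MongoClient

coll = MongoClient().test.docs

# Seed a demo document so the read-modify-write below has a target.
coll.update_one({"_id": 1},
                {"$setOnInsert": {"balance": 0, "version": 0}},
                upsert=True)

doc = coll.find_one({"_id": 1})
result = coll.update_one(
    {"_id": 1, "version": doc["version"]},      # matches only if unchanged
    {"$set": {"balance": doc["balance"] + 10},  # the read-modify-write
     "$inc": {"version": 1}},                   # bump so concurrent writers miss
)
if result.modified_count == 0:
    print("lost the race: re-read the document and retry")
```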
and even in driver that makes non-blocking calls
Hmm, I think you might be confused by what is meant by a "non-blocking" application or server: http://en.wikipedia.org/wiki/Non-blocking_algorithm

NoSQL databases: what about read consistency?

From what I can make out, NoSQL databases might be a good option for high-intensity data-read applications, but are a less good fit if you also need to do a lot of data updates and transactionality is very important to you (what with there being no ACID compliance). Right? Too simplistic, maybe.
But anyway, supposing I'm at least partly right, I'm now concerned about how NoSQL databases maintain a "read-consistent" view of the data that you're either reading or writing. Or do they? And if they don't, isn't that a really big problem?
I mean, if the data that you're reading (or updating) is changing as you read it, then you're potentially going to get an inconsistent/dirty result set. Coming from an Oracle RDBMS background, where all this is just handled for you, I find it confusing how the lack of read consistency could be anything but a big problem. It could well be, though, that I'm missing some key point about all this. Can someone set me straight?
I am a developer on the Oracle NoSQL Database and will answer your question relative to that particular NoSQL system.
The Oracle NoSQL Database API allows the programmer to specify, with each API call, the level of read consistency. The four possible values, ranging from strictest to loosest, are:
Absolute: always read from the replication master, so that the most current value is returned.
Time: the system can return a value from any replica that is within a certain time delta of the master (e.g. read the value from any replica that is within 2 seconds of the master).
Version: every read and write call to the system returns a "version handle". This handle may be passed into any read call when Consistency.Version is specified, and it tells the system to read from any replica that is at least as up to date as that version. This is useful for read-modify-write (aka CAS) scenarios.
None: any replica can be used (i.e. there is no consistency guarantee).
I hope this is helpful.
Charles Lamb
A NoSQL database can be read-consistent, although it's generally not a big problem if it's not strictly so, check out the CAP theorem. There's been quite a lot of research done in this area, I recommend reading Amazon's Dynamo paper for a quick view of some of the problems and solutions faced by distributed systems like NoSQL databases.
MongoDB allows the application to select the desired level of read consistency using "write concern". This concept allows your application to block until a certain condition is met for a given write.
By way of example, you can consider any write successful so long as the operation is communicated to a master server. Alternatively, you can block until a write has been propagated to a majority of nodes in your replica set. In this way, you can mix performance/consistency to taste.
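A hedged PyMongo sketch of that knob (database and collection names are made up; w="majority" only has an effect on a replica set):

```python
from pymongo import MongoClient, WriteConcern

client = MongoClient()  # assumes a replica set for "majority" to matter
contacts = client.app.get_collection(
    "contacts",
    write_concern=WriteConcern(w="majority", wtimeout=5000),
)

# insert_one returns only once a majority of nodes acknowledge the write
# (or after 5 s, when wtimeout raises an error instead).
contacts.insert_one({"name": "Ada"})
```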
It depends on the NoSQL database you are using, as each implements a different strategy. You can read, for example, Riak's explanation of their "eventual consistency" model, or Lars Hofhansl's write-up on ACID in HBase.

Disadvantages of CouchDB

I've very recently fallen in love with CouchDB. I'm pretty excited by its enormous benefits and by its beauty. Now I want to make sure that I haven't missed any show-stopping disadvantages.
What comes to your mind? Attached is a list of points that I have collected. Is there anything to add?
Blog posts from as late as 2010 claim "not mature enough" (whatever that's worth).
Slower than in-memory DBMSs.
In-place updates require server-side logic (update handlers).
Trades disk for speed: databases can become huge compared to other DBMSs (compaction functionality exists, though).
"Only" eventual consistency.
Temporary views on large datasets are very slow.
Replication of large databases may fail.
The map/reduce paradigm requires rethinking (listed only for completeness).
The only point that worries me is #3 (in-place updates), because it's quite inconvenient.
The data is in JSON
This means that documents are quite large (big data, network bandwidth, speed), and having descriptive key names actually hurts, since they add to the document size.
No built in full text search
Although there are ways: couchdb-lucene, elasticsearch
plus some more:
It doesn't support transactions
This means that enforcing uniqueness of one field across all documents is not safe, for example, enforcing that a username is unique. Another consequence of CouchDB's inability to support the typical notion of a transaction is that things like incrementing/decrementing a value and saving it back are also dangerous. There aren't many instances where we would want to simply increment/decrement some value and couldn't just store the individual documents separately and aggregate them with a view.
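The standard workaround for the uniqueness case is to encode the field in the document _id, since _id uniqueness is the one guarantee the database does enforce. A hypothetical sketch against a local CouchDB with a users database:

```python
import requests

DB = "http://localhost:5984/users"  # assumed database URL

def register(username, profile):
    # PUT to a fixed _id either creates the document or fails with 409,
    # which stands in for the missing uniqueness transaction.
    resp = requests.put(f"{DB}/user:{username}", json=profile)
    if resp.status_code == 409:
        raise ValueError(f"username {username!r} is already taken")
    resp.raise_for_status()
```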
Relational data
If the data makes a lot of sense in third normal form and we try to follow that form in CouchDB, we are going to run into a lot of trouble. A possible way to solve this problem is with view collations (see the sketch below), but we may end up constantly fighting the system. If the data can be reformatted to be much more denormalized, then CouchDB will work fine.
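For illustration, a hedged sketch of such a collated view; the blog database, document types, and field names are all made up. A post and its comments emit keys that share a prefix, so a single range query returns the post followed by its comments.

```python
import requests

DB = "http://localhost:5984/blog"  # assumed database URL

design = {
    "views": {
        "post_with_comments": {
            # CouchDB map functions are JavaScript, shipped as strings.
            "map": """function (doc) {
                if (doc.type === 'post')    emit([doc._id, 0], doc.title);
                if (doc.type === 'comment') emit([doc.post_id, 1], doc.text);
            }"""
        }
    }
}
requests.put(f"{DB}/_design/collation", json=design).raise_for_status()

# One range query yields the post row, then its comments, in key order.
rows = requests.get(
    f"{DB}/_design/collation/_view/post_with_comments",
    params={"startkey": '["p1"]', "endkey": '["p1", {}]'},
).json()["rows"]
```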
Data warehouse
The problem with this is that temporary views in CouchDB on large datasets are really slow. Using CouchDB with permanent views could work quite well. However, in most cases, a column-oriented database of some sort is a much better tool for the data warehousing job.
But CouchDB Rocks!
But don't let this discourage you: the NoSQL DBs written in Erlang (CouchDB, Riak) are the best, since Erlang is meant for distributed systems. Have fun with Couch!
Two more things that make me cry when using CouchDB (though it's awesome):
It is not designed for frequently updated data.
It doesn't have built-in full-text search.
Lack of reader ACLs (they do exist for writers, however)
As an old Lotus Domino pro, I was looking at CouchDB as an alternative for a new project I'm kicking off, and found the limits on readers to be very weak in Couch vs. Domino. In my app, security is an important consideration, and Couch would require a middleware layer to handle reader security.
If you have a database in which it's okay that all defined users can see all the documents, then Couch looks like an interesting platform.
If restricting reads is needed, then you'll need to look to a middleware solution or consider another alternative.
Note to CouchDB developers: improve the platform's security options. I realize they will diminish performance when used, but note that and make the option available.
Now back to determining which database to use...
Currently no support for ad-hoc queries (might change with the advent of UnQL).
Lack of binary protocol support for faster communication.
It's nothing to do with CouchDB itself, but being a relative newcomer on the scene means that most sysadmins are still unfamiliar with it and won't allow it anywhere near "their" data centers. If you're in a situation where you're deploying to an environment you don't control yourself, this can be quite the battle.
Lack of support for data archiving: no official support for data archiving is provided with the CouchDB open-source distribution.
Deleting records from the DB is not straightforward.
No option to set an expiry (TTL) flag for documents.