Why are RDBMS considered Available (CA) under the CAP theorem?

If I understand the CAP Theorem correctly, availability means that the cluster continues to operate even if a node goes down.
I've seen a lot of people (http://blog.nahurst.com/tag/guide) list RDBMS as CA, but I do not understand how an RDBMS is available: if a node goes down, the cluster must go down to maintain consistency.
My only possible answer to this has been that most RDBMS run on a single node, so there is no "non-failing" node. But this seems like a technicality, not true 'availability' and definitely not high availability.
Thank you.

First of all, let me clarify that consistency in an RDBMS is different from consistency in distributed systems. In an RDBMS (a single system), consistency refers to transactional consistency, whereas in distributed systems consistency means that the view from anywhere in the system (a read from any node) is consistent. So a single-node RDBMS cannot be discussed in terms of the CAP theorem; it is like comparing apples to oranges.
An RDBMS with master-slave replication can be compared to distributed systems. Here the RDBMS can be configured as CA, CP, or AP. MySQL, for example, provides a way to configure the system so that if there is a quorum loss (not enough secondaries available for commit-log replication), the cluster becomes unavailable (a CP system). MySQL also provides a configuration that lets the cluster keep operating as long as the master is available (a CA system), with the potential for data loss. SQL Server AlwaysOn is an AP system, because commit-log replication is asynchronous (even on synchronous replicas).
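To make that configuration knob concrete, here is a minimal, purely illustrative Python sketch (not MySQL's actual implementation; all names are made up) of a commit path that either rejects writes when the replica quorum is lost (the CP behaviour) or accepts them on the master alone (the CA behaviour, risking data loss):

    from dataclasses import dataclass

    @dataclass
    class Replica:
        name: str
        reachable: bool = True

    def commit(entry, replicas, quorum, allow_master_only=False):
        """Replicate a commit-log entry, then decide whether the write succeeds."""
        acks = sum(1 for r in replicas if r.reachable)   # replicas that acknowledged the entry
        if acks >= quorum:
            return f"committed {entry!r} (master + {acks} replica(s))"
        if allow_master_only:                            # CA-style: stay up, accept possible data loss
            return f"committed {entry!r} on master only (data loss possible on failover)"
        raise RuntimeError("quorum lost, write rejected")  # CP-style: refuse the write

    replicas = [Replica("B"), Replica("C", reachable=False)]
    print(commit("v2", replicas, quorum=1))                           # succeeds: one replica still acks
    print(commit("v2", replicas, quorum=2, allow_master_only=True))   # CA path: master-only commit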
So RDBMS can be any of CA, CP or AP in a distributed world.

I believe you are misunderstanding the relation between CAP availability and a node being up or down. Availability is about providing an answer to every received query: when a node is down it cannot receive queries, therefore if you bring down part of or the entire cluster, the CAP availability property still holds. Although this may sound counterintuitive at first glance, by shutting down nodes you are holding on to CAP availability and dropping CAP partition tolerance instead. I recently posted an answer whose examples provide some clarification.
In a nutshell: a partition occurs that isolates node N. If N receives a request it can either: i) answer, which grants availability but drops consistency because N is out of sync; or ii) not answer, to avoid replying with an out-of-date result, thereby dropping availability because we received a request but issued no reply for it.
Alternatively, we can shut down N as soon as it becomes disconnected from the rest of the cluster, which allows us to keep C and A but drop P, because: i) N will not receive any requests; ii) all received requests will be served by the fully connected and consistent cluster, hence they will all be answered with consistent values; iii) the cluster is not partition tolerant because it does not tolerate partitions; instead, it shuts down partitioned nodes.
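A toy sketch (illustrative only, no real database API) of the three choices an isolated node N has when a read arrives, and which CAP letter each choice gives up:

    from enum import Enum

    class Policy(Enum):
        ANSWER_ANYWAY = "AP"   # reply, possibly with stale data (drops C)
        REFUSE = "CP"          # receive the request but give no answer (drops A)
        SHUTDOWN = "CA"        # node is taken down, so it never receives the request (drops P)

    def handle_read(key, local_store, partitioned, policy):
        if partitioned and policy is Policy.SHUTDOWN:
            raise ConnectionError("node is down; clients only ever reach the consistent cluster")
        if partitioned and policy is Policy.REFUSE:
            return None                 # request received, no answer given: availability is sacrificed
        return local_store.get(key)     # may be out of date while partitioned: consistency is sacrificed

    store = {"x": 1}
    print(handle_read("x", store, partitioned=True, policy=Policy.ANSWER_ANYWAY))  # 1, possibly stale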

In the CAP theorem, P stands for partition tolerance, which is the ability of the system to handle partitions (isolated sub-clusters caused by network failure or any other reason).
In a distributed network, to handle a partition, the system has to pick either consistency or availability.
In the case of an RDBMS there is no chance of a partition (assuming it is not distributed, which is the normal case), so it will always be CA.

Related

Failover and strong consistency in Couchbase

We have a three-node Couchbase cluster with two replicas and durability level MAJORITY.
This means that a mutation will be replicated to the active node (node A) and to one of the two replicas (node B) before it is acknowledged as successful.
In terms of consistency, what will happen if node A becomes unavailable and the hard failover process promotes the replica on node C before node A manages to replicate the mutation to node C?
According to the docs Protection Guarantees and Automatic Failover, the write is durable, but will it be available immediately?
Answered by #ingenthr here.
Assuming the order is that the client gets the acknowledgment of the durability, then the hard failover is triggered of your node A, during the failover the cluster manager and the underlying data service will determine whether node B or C should be promoted to active for that vbucket (a.k.a. partition) to satisfy all promised durability. That was actually one of the trickier bits of implementation.
“Immediately” is pretty much correct. Technically it does take some time to do the promotion of the vbucket, but this should be very short as it’s just metadata checks and state changes and doesn’t involve any data movement. Clients will need to be updated with the new topology as well. How long is a function of the environment and what else is going on, but I’d expect single-digit seconds or even under a second.
Assuming you’re using a modern SDK API 3.x client with best-effort retries, it will be mostly transparent to your application, but not entirely transparent since you’re doing a hard failover. Non-idempotent operations, for example, may bubble up as errors.
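To illustrate why the durably acknowledged mutation survives the failover, here is a toy Python model (not Couchbase code; the node names and sequence numbers are made up) in which the failover logic promotes whichever surviving replica has acknowledged the highest sequence number, so a MAJORITY-acknowledged write is never rolled back:

    # Toy model of a single vbucket with one active node and two replicas.
    # acked_seqno = highest mutation each node has acknowledged.
    nodes = {"A": 42, "B": 42, "C": 41}   # A is active; the MAJORITY write (seqno 42) reached A and B

    def hard_failover(failed, nodes):
        survivors = {name: seqno for name, seqno in nodes.items() if name != failed}
        # Promote the replica that has acknowledged the most mutations, so every
        # durably acknowledged write (here seqno 42) is still present on the new active node.
        new_active = max(survivors, key=survivors.get)
        return new_active, survivors[new_active]

    active, seqno = hard_failover("A", nodes)
    print(active, seqno)   # -> B 42 : the majority-acknowledged mutation is preserved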

How can a CA distributed system exist according to the CAP theorem?

How can a distributed system be consistent and available (CA)?
I would argue that when a network partition occurs, CA cannot be possible in a way where every node of the network, even the partitioned nodes that users are connected to, continues to be available and answers with consistent data.
It can't.
As often mentioned, the CAP theorem in its original form is a little misleading. It can be restated as:
in the presence of a network partition, a distributed system is either available or consistent
so you are right. Generally, systems cannot be classified as CA, CP, or AP only, since partition tolerance is a property of the system that describes what it chooses in case of a network partition. So it is possible that a system behaves as AP sometimes and CP at other times (however, this is not common).
Another interesting part is that RDBMS databases are often placed at the CA side of the triangle. This is only the case in a single-node setup. Even with a master (write) - slave (read) setup, the system is not CA (or, if it is termed "CA" for some reason and cannot recover from network partitions, then a split-brain scenario may happen: a new master is elected for the partition, and chaos ensues, possibly breaking the consistency of the system).
Useful read: https://codahale.com/you-cant-sacrifice-partition-tolerance/.
It can, but it won't.
The CAP theorem reasons about guarantees when one or more nodes get isolated from the rest of the cluster. In such cases a node has three options, which result in the three known CAP trade-offs: i) it keeps responding to any received requests (AP); ii) it no longer responds to received requests until it is again able to reach the others (CP); iii) it shuts down before receiving any requests, eliminating the partition along with it (CA).
In other words, you can achieve CA by having your nodes shut down instead of tolerating the partition, but bear in mind that partitions are likely to keep happening, so this converges to the scenario in which you have a single node in your cluster, and I assume this is the opposite of what you want, i.e. having a cluster with multiple nodes is kind of the whole point.
Therefore, in practice you end up choosing between CP and AP. See this answer for more illustrative examples.
Dr. Stonebraker says: The guidance from the CAP theorem is that you must choose either A or C, when a network partition is present. As is obvious in the real world, it is possible to achieve both C and A in this failure mode.
See this for thoughts on why CA can exist:
CA is a specification of the operating range: you specify that the system does not work well under partition or, more precisely, that partitions are outside the operating range of the system.
My background is far from these theoretical considerations and I must say it is highly confusing. I am researching distributed blockchain systems and I don't see why those "generalized" definitions of C, A, and P must always apply. If, let's say, 5% of nodes fail or are otherwise partitioned, the consensus still functions. If an end user is connected to a partitioned node, the node could let the user know it lost its connection. I don't even see how any major blockchain network is CP without defining conditions such as "if a certain number of nodes fail or get partitioned, the consensus halts".

Is a replication log necessary to achieve linearizability in a distributed store?

The Raft algorithm used by etcd and the ZAB algorithm used by ZooKeeper both use a replication log to update a state machine.
I was wondering if it's possible to design a similar system by simply using leader election and versioned values, and why those systems decided to use a replication log.
In my example we have the following setup:
machine A (leader), contains version 1
machine B (follower), contains version 1
machine C (follower), contains version 1
And a write would go like this:
1. Machine A receives a write request and stores the pending write V2
2. Machine A sends a prepare request to machine B and machine C
3. The followers (machine B and machine C) send an acknowledgement to the leader (machine A)
4. After the leader (machine A) receives acknowledgements from a quorum of machines, it knows V2 is now committed and sends a success response to the client
5. The leader (machine A) sends a finalize request to the followers (machine B and machine C) to inform them that V2 is committed and V1 can be discarded
For this system to work, on a leader change, after acquiring the leader lease the new leader has to get the latest data version by reading from a quorum of nodes before accepting requests. A rough sketch of the write path described above follows.
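Here is a minimal, purely illustrative Python sketch of that versioned-value protocol (the class and method names are made up, and leader election, retries, and failure handling are omitted):

    class Follower:
        def __init__(self):
            self.committed = ("v1", 1)     # (value, version)
            self.pending = None

        def prepare(self, value, version):
            self.pending = (value, version)
            return True                    # acknowledge the prepare request

        def finalize(self, version):
            if self.pending and self.pending[1] == version:
                self.committed = self.pending   # V2 becomes committed, V1 is discarded
                self.pending = None

    class Leader:
        def __init__(self, followers):
            self.followers = followers
            self.committed = ("v1", 1)

        def write(self, value):
            version = self.committed[1] + 1
            pending = (value, version)                                   # step 1: pending write V2
            acks = [f.prepare(value, version) for f in self.followers]   # steps 2-3: prepare + acks
            quorum = (len(self.followers) + 1) // 2 + 1                  # leader counts toward quorum
            if 1 + sum(acks) < quorum:
                raise RuntimeError("no quorum, write rejected")
            self.committed = pending                                     # step 4: committed, reply to client
            for f in self.followers:                                     # step 5: finalize
                f.finalize(version)
            return version

    leader = Leader([Follower(), Follower()])
    print(leader.write("v2"))   # -> 2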
The Raft algorithm in etcd and the ZAB algorithm in ZooKeeper both use a replication log to update a state machine.
I was wondering if it's possible to design a similar system by simply using leader election and versioned values.
Yes, it's possible to achieve consensus/linearizability without log replication. Originally the consensus problem was solved by Leslie Lamport in his Paxos papers ("The Part-Time Parliament", 1998, later explained in "Paxos Made Simple"). He described two algorithms: Single Decree Paxos, to build a distributed linearizable write-once register, and Multi-Paxos, to build a distributed state machine on top of an append-only log (an ordered array of write-once registers).
Append-only logs are a much more powerful abstraction than write-once registers, so it isn't surprising that people chose logs over registers. Besides, until Vertical Paxos (2009) was published, log replication was the only consensus protocol capable of cluster membership change, which is vital for many tasks: if you can't replace failed nodes, then eventually your cluster becomes unavailable.
Although Vertical Paxos is a good paper, I found Raft's idea of cluster membership change via joint consensus much easier to understand, so I wrote a post on how to adapt Raft's approach to Single Decree Paxos.
With time, the "write-once" nature of Single Decree Paxos was also overcome, turning write-once registers into distributed linearizable variables, a quite powerful abstraction suitable for many use cases. In the wild I saw that approach in the Treode database. If you are interested, I blogged about this improved SDP in the How Paxos Works post.
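For reference, here is a compact Python sketch of Single Decree Paxos as a write-once register (ballots, a single decided value, no networking or failure recovery; purely illustrative, not taken from any of the posts mentioned above):

    class Acceptor:
        def __init__(self):
            self.promised = -1          # highest ballot promised
            self.accepted = (-1, None)  # (ballot, value) last accepted

        def prepare(self, ballot):
            if ballot > self.promised:
                self.promised = ballot
                return True, self.accepted
            return False, None

        def accept(self, ballot, value):
            if ballot >= self.promised:
                self.promised = ballot
                self.accepted = (ballot, value)
                return True
            return False

    def propose(acceptors, ballot, value):
        """One round of Single Decree Paxos: decide a single, write-once value."""
        promises = [a.prepare(ballot) for a in acceptors]
        granted = [acc for ok, acc in promises if ok]
        if len(granted) <= len(acceptors) // 2:
            raise RuntimeError("no quorum of promises")
        # If any acceptor already accepted a value, propose the one with the
        # highest ballot instead of our own (this is what makes it write-once).
        prior = max(granted, key=lambda acc: acc[0])
        chosen = prior[1] if prior[1] is not None else value
        acks = sum(a.accept(ballot, chosen) for a in acceptors)
        if acks <= len(acceptors) // 2:
            raise RuntimeError("no quorum of accepts")
        return chosen

    acceptors = [Acceptor(), Acceptor(), Acceptor()]
    print(propose(acceptors, ballot=1, value="v2"))   # -> "v2"
    print(propose(acceptors, ballot=2, value="v3"))   # -> "v2": the register is write-once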
So now that we have an alternative to logs, it makes sense to consider it, because log-based replication is complex and has intrinsic limitations:
with logs you need to care about log compaction and garbage collection
the size of the log is limited by the size of one node
protocols for splitting a log and migrating to a new cluster are not well known
And why those system decided to use a replication log.
The log-based approach is older than the alternative, so it has had more time to gain popularity.
About your example
It's hard to evaluate, because you didn't describe how the leader election happens and how conflicts between leaders are resolved, what the strategy is for handling failures, and how to change the membership of the cluster.
I believe if you describe them carefully you'll get a variant of Paxos.
Your example makes sense. However, have you considered every possible failure scenario? In step 2, machine B could receive the message minutes before or after machine C (or vice versa) due to network partitions or faulty routers. In step 3, the acknowledgements could be lost, delayed, or retransmitted numerous times. The leader could also fail and come back up once, twice, or potentially several times, all within the same consensus round. And in step 5, the finalize messages could be lost or duplicated, or machine B could receive the notification while machine C misses it (or vice versa).
Conceptual simplicity, also known as "reducing the potential points of failure", is key to distributed systems. Anything can happen, and will happen, in realistic environments. Primitives such as replicated logs based on consensus protocols proven to be correct in any environment are a solid foundation upon which to build higher levels of abstraction. It's certainly true that better performance, latency, or whatever your metric of interest is can be achieved by a custom-built algorithm, but ensuring correctness for such an algorithm is a major time investment.
Replicated logs are simple, easily understood, predictable, and fall neatly into the domain of established consensus protocols (Paxos, Paxos variants, and Raft). That's why they're popular. It's not that they're the best for any particular application; rather, they're understood and reliable.
For related references, you may be interested in Understanding Paxos and Consensus in the Cloud: Paxos Systems Demystified

Why isn't RDBMS Partition Tolerant in CAP Theorem and why is it Available?

Two points I don’t understand about RDBMS being CA in CAP Theorem :
1) It says RDBMS is not partition tolerant, but how is an RDBMS any less partition tolerant than technologies like MongoDB or Cassandra? Is there an RDBMS setup where we give up CA to make it AP or CP?
2) How is it CAP-available? Is it through a master-slave setup, as in when the master dies, a slave takes over writes?
I’m a novice at DB architecture and CAP theorem so please bear with me.
It is very easy to misunderstand the CAP properties, hence I'm providing some illustrations to make it easier.
Consistency: A query Q will produce the same answer A regardless of the node that handles the request. In order to guarantee full consistency we need to ensure that all nodes agree on the same value at all times. Not to be confused with eventual consistency, in which the network moves towards having all data consistent but there are periods of time in which it is not.
Availability: If the distributed system receives query Q it will always produce an answer for that query. This should not be confused with "high availability"; this is not about having the capacity to process a higher throughput of queries, it is about not refusing to answer.
Partition tolerance: The system continues to function despite the existence of a partition. This is not about having mechanisms to "fix" the partition, it is about tolerating the partition, i.e. continuing despite it.
Note that the following examples do not cover all possible scenarios.
An example for CP:
The system is partition tolerant because its nodes keep accepting requests despite the partition; it is consistent because the only nodes providing answers are those that maintain a connection to the master node that handles all the write requests; it is not available because the nodes in the other partition do not provide an answer to the queries they receive.
Examples for AP:
Either because (respectively) the slave nodes reply to requests regardless of whether they are able to reach the master, or because the slave nodes in the other partition elect a new master, or because we have a masterless cluster, availability is achieved because all queries get an answer; consistency is dropped because both partitions keep replying while potentially diverging in state.
Examples for CA:
If we disconnect nodes when a partition occurs, we can ensure that we have at most one partition, which ultimately means that the network is not partitioned anymore, or that there is simply no service at all. This is the opposite of partition tolerance, because the system avoids the partition instead of functioning despite it. Consistency and availability hold in these partially or fully disconnected systems because all working nodes (if any) have the same state and all received queries (if any) get an answer; shut-down nodes do not receive queries.
To answer the questions:
Under default configurations, databases such as Cassandra and MongoDB are partition tolerant because they do not shut down nodes to cope with partitions, whereas an RDBMS such as MySQL does.
Availability has very little to do with a master/slave setup; e.g., Cassandra is masterless and very available because it doesn't really matter which node dies. As for availability in a master/slave setup, there is no reason to stop responding to all queries when the master is dead, but you may need to suspend write operations while electing a new one.
A lot of databases now actually have different configurations, and depending on the settings you choose, a database can be CA, CP, AP, etc., but it cannot achieve all three at the same time. Some databases actually make an effort to support all three, but still prioritize them in a certain way.
For example, MySQL can be CP or CA depending on the configuration. By default, it is CA because it follows a master-slave paradigm in which data is replicated to the slaves. Partition tolerance is sacrificed in the event that a set of slaves loses the connection to the master and therefore decides to elect a new master, creating two masters with their own sets of slaves.
However, MySQL also has a clustered configuration, which prioritizes CP over availability: e.g., the cluster will shut down if there are not enough live nodes to serve all the data.
There are probably more MySQL configurations that satisfy other CAP combinations, but overall I just wanted to say that it depends on what your system requires. Sometimes one database is better suited to a given configuration than another, so it is best to see what kinds of problems may occur when using a certain configuration.
As for implementing the CAP theorem, I would advise taking a further look into different databases and how they implement the priorities for the CAP theorem. There are just too many different ways of implementing them; e.g., generally, the master-slave model is used for CA systems, and the hash ring for AP systems.
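As a quick illustration of the hash-ring idea, here is a minimal consistent-hashing sketch in Python (illustrative only; the node names and the number of virtual nodes are arbitrary). Any node that owns a key's position on the ring can serve it, so there is no single master whose loss blocks writes:

    import bisect
    import hashlib

    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    class HashRing:
        """Minimal consistent hash ring with virtual nodes."""

        def __init__(self, nodes, vnodes=8):
            self.ring = sorted((_hash(f"{n}#{i}"), n) for n in nodes for i in range(vnodes))
            self.keys = [h for h, _ in self.ring]

        def node_for(self, key: str) -> str:
            idx = bisect.bisect(self.keys, _hash(key)) % len(self.ring)
            return self.ring[idx][1]

    ring = HashRing(["node-a", "node-b", "node-c"])
    print(ring.node_for("user:42"))   # the owning node; if it dies, the key maps to the next node on the ring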
The CAP theorem is problematic, and it applies only to distributed database systems. When you have distributed databases, network partitions and node crashes can happen. And when a network partition happens, you must have partition tolerance (the P of your CAP).
So, to answer question 1): it's either CP or AP. It can be configured as Will mentioned.
More about why partition tolerance is a must:
https://codahale.com/you-cant-sacrifice-partition-tolerance/
More about problems around CAP theorem:
https://martin.kleppmann.com/2015/05/11/please-stop-calling-databases-cp-or-ap.html
I agree that an RDBMS can have all the properties of CAP. I have started studying NoSQL databases and had prior experience with IBM DB2.
Here is how IBM DB2 satisfies all 3 CAP properties:
C : Consistency : Every relational database satisfies this due to the transactional nature of RDBMS.
A : Availability : Availability means that when a query is made for data that exists, it should be returned. Again, a relational database is designed to do this easily.
P : Partition tolerance : This is the most interesting one. From a DB2 standpoint, in the application that I was working on, we had two databases spread across different data centres. One was the primary and communicated with the secondary via heartbeats. Each of these primary and secondary databases had 12 physical instances, where data was distributed on the basis of some predefined logic. If the primary goes down, the secondary detects this and takes the place of the primary. Since the primary and secondary were always kept in sync, data remains consistent as well.
This is how I think that RDBMS satisfies all 3 properties of CAP Theorem.
I may be wrong, and open to discussion on this.

NoSQL: What does it mean for MongoDB or BigTable to not always be "Available"

Reading Nathan Hurst's Visual Guide to NoSQL Systems, he includes the CAP triangle:
Consistency
Availability
Partition Tolerance
With SQL Server being a CA system, and MongoDB being a CP system.
These definitions come from UC Berkeley professor Eric Brewer and his talk at PODC 2000 (Principles of Distributed Computing):
Availability
Availability means just that - the service is available (to operate fully or not as above). When you buy the book you want to get a response, not some browser message about the web site being uncommunicative. Gilbert & Lynch in their proof of the CAP theorem make the good point that availability most often deserts you when you need it most - sites tend to go down at busy periods precisely because they are busy. A service that's available but not being accessed is of no benefit to anyone.
What does it mean, in the context of MongoDB, or BigTable, for the system to not be "available"?
Do you go to connect (e.g. over TCP/IP) and the server does not respond? Do you attempt to execute a query, but the query never returns, or returns an error?
What does it mean to not be available?
Availability in this case means that in the event of a network partition, the server that a client connects to may not be able to guarantee the level of consistency that the client expects (or that the system is configured to provide).
Assume that you have 3 nodes, A, B, and C, in a hypothetical distributed system, each running in its own rack of servers, with 2 switches between them:
[Node A] <- Switch #1 -> [Node B] <- Switch #2 -> [ Node C ]
Now assume that said system is set up so that it is GUARANTEED that any write will go to at least 2 nodes before it is considered committed. Now, let's assume that switch #2 gets unplugged and some client is connected to node C:
[Node A] <- Switch #1 -> [Node B] [ Node C ] <-- Some client
That client will not be able to issue Consistent writes, because the distributed system is currently in a partitioned state (namely, Node C cannot contact enough other nodes to guarantee the 2-node consistency required).
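For a concrete (hedged) example of what that looks like to an application, a write with a majority write concern in pymongo would time out instead of silently succeeding on the partitioned side; the hostnames, database, and collection names below are placeholders:

    from pymongo import MongoClient
    from pymongo.write_concern import WriteConcern
    from pymongo.errors import PyMongoError

    # Placeholder replica-set URI; in a 3-node set, w="majority" means 2 acknowledgements.
    client = MongoClient("mongodb://nodeA,nodeB,nodeC/?replicaSet=rs0",
                         serverSelectionTimeoutMS=5000)
    orders = client.shop.get_collection(
        "orders", write_concern=WriteConcern(w="majority", wtimeout=5000))

    try:
        orders.insert_one({"book": "example-title", "qty": 1})
    except PyMongoError as exc:
        # On the minority side of a partition the write cannot reach 2 nodes,
        # so it fails (times out) rather than returning an unsafe success.
        print("write not acknowledged by a majority:", exc)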
I'd add to this that some NoSQL databases allow very dynamic selection of CAP attributes. Cassandra, for instance, allows clients to specify the number of servers that a write must go to before it is considered committed, on a per-write basis. Writes going to a single server are more "AP"; writes going to a quorum (or all) of the servers are more "CP".
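With the DataStax Python driver this per-write choice is just a consistency level on the statement; a brief sketch, where the contact points, keyspace, and table are assumptions:

    from cassandra import ConsistencyLevel
    from cassandra.cluster import Cluster
    from cassandra.query import SimpleStatement

    cluster = Cluster(["10.0.0.1", "10.0.0.2", "10.0.0.3"])  # placeholder contact points
    session = cluster.connect("shop")                        # assumed keyspace

    # ONE favours availability (a single replica ack is enough);
    # QUORUM/ALL favour consistency at the cost of availability under partition.
    fast_write = SimpleStatement(
        "INSERT INTO orders (id, book) VALUES (%s, %s)",
        consistency_level=ConsistencyLevel.ONE)
    safe_write = SimpleStatement(
        "INSERT INTO orders (id, book) VALUES (%s, %s)",
        consistency_level=ConsistencyLevel.QUORUM)

    session.execute(fast_write, (1, "book-a"))
    session.execute(safe_write, (2, "book-b"))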
EDIT - from the comments below:
In MongoDB you can only have a master/slave configuration within a replica set. What this means is that the choice of AP vs CP is made by the client at query time. The client can specify slaveOk, which will read from an arbitrarily selected slave (which may have stale data): mongodb.org/display/DOCS/…. If the client is not OK with stale data, it should not specify slaveOk and the query will go to the master. If the client cannot reach the master, then you'll get an error. I'm not sure exactly what that error will be.
The CAP theorem applies to distributed computer systems. MongoDB supports two distinct forms of distributed computing: sharding for horizontal scaling and replica sets for failover/high availability. The two can be used together or independently. I think the CAP theorem applies slightly differently to the two forms:
Sharding level - MongoDB stores data on at most one authoritative shard.
Strong Consistency: A piece of data exists on at most one shard. Incorrect/stale data does not exist.
Strong Partition-tolerance: Even if network partitioned, requests never return incorrect/stale data. Shards continue working independent of other shards.
Weak Availability: Reads/writes of data on a downed shard will fail.
Replica set level - MongoDB replicates data within a shard, ensuring consistency via a single, authoritative primary node.
Strong Consistency: All reads/writes handled by the primary node.
Strong Partition-tolerance: If enough nodes become unreachable, a new primary is elected. The election process ensures there is always at most one primary node.
Weak Availability: Reads/writes will fail when no primary exists, even though the data could be accessed via secondary nodes.
The slaveOK/ReadPreference.SECONDARY option sacrifices some consistency (stale data can be read) for increased performance and availability.
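In current drivers the same trade-off is expressed through read preferences rather than slaveOk; a short pymongo sketch (hostnames, database, and collection names are placeholders):

    from pymongo import MongoClient, ReadPreference

    client = MongoClient("mongodb://nodeA,nodeB,nodeC/?replicaSet=rs0")  # placeholder URI

    # Default: reads go to the primary -> consistent, but they fail while no primary exists.
    primary_reads = client.shop.get_collection("orders")

    # Opt in to possibly stale reads from secondaries (the modern equivalent of slaveOk):
    secondary_reads = client.shop.get_collection(
        "orders", read_preference=ReadPreference.SECONDARY_PREFERRED)

    doc = secondary_reads.find_one({"book": "example-title"})  # may return stale data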