Commit Failure in Paxos - distributed-computing

I am new to distributed systems and consensus algorithms. I understand how Paxos works, but I am confused by some corner cases: when the acceptors have received an ACCEPT for an instance but never hear back about the final consensus or decision, how will the acceptors react? For example, suppose the proposer is rebooting or fails during commit, or right after it sends all the ACCEPT messages. What will happen in this case?
Thanks.

There are two parts to this question: How do the acceptors react to new proposals? and How do acceptors react if they never learn the result?
In plain-old paxos, the acceptors never actually need to know the result. In fact it is perfectly reasonable that different acceptors have different values in their memory, never knowing if the value they have is the committed value.
The real point of paxos is to deal with the first question. And since the acceptor never actually knows if it has the committed value, it has to assume that it could have the committed value, yet be open to replacing its value if it doesn't. How does it know? When receiving a message the acceptor always compares the round number, and if that is old then the acceptor signals to the proposer that it has to "catch up" first (a Nack). Otherwise, it trusts that the proposer knows what it is doing.
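A minimal sketch of that acceptor-side check, with illustrative names (promised_round, Nack) rather than anything from a specific implementation:

# Sketch of an acceptor that tracks the highest round it has promised and
# the last (round, value) it has accepted. Message and field names are
# assumptions for illustration, not from a particular Paxos library.

class Acceptor:
    def __init__(self):
        self.promised_round = -1      # highest round promised so far
        self.accepted_round = -1      # round of the last accepted value
        self.accepted_value = None    # may or may not be the committed value

    def on_prepare(self, round_id):
        if round_id < self.promised_round:
            return ("Nack", self.promised_round)   # proposer must catch up
        self.promised_round = round_id
        return ("Promise", round_id, self.accepted_round, self.accepted_value)

    def on_accept(self, round_id, value):
        if round_id < self.promised_round:
            return ("Nack", self.promised_round)   # stale proposer
        self.promised_round = round_id
        self.accepted_round = round_id
        self.accepted_value = value                # trust the proposer
        return ("Accepted", round_id, value)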
Now for a word about real systems. Some real paxos systems can get away with the acceptors not caring what the committed value is: Paxos is just there to choose what the value will be. But many real systems use Paxos & Friends to make redundant copies of the data for safekeeping.
Some paxos systems will continue paxos-ing until all the acceptors have the data. (Notice that without interference from other proposers, an extra paxos round copies the committed value everywhere.) Other systems are wary of interference from other proposers and will use a distinct Committed message that teaches the acceptors (and other Learners) what the committed value is.
But what happens if the proposer crashes? A subsequent proposer can come along and propose a no-op value. If the subsequent proposer Prepares (Phase 1A) and can communicate with ANY of the acceptors that the prior proposer successfully sent Accepts to (Phase 2A) then it will know what the prior proposer was trying to do (via the response in Phase 1B: PrepareAck). Otherwise a harmless no-op value gets committed.

when the acceptors received an ACCEPT for an instance but never heard back about what the final consensus or decision is, [how] will the acceptors react.
The node sending the value usually learns its value is fixed by counting positive responses to its ACCEPT messages until it sees a majority. If messages are dropped they can be resent until enough messages get through to determine a majority outcome. The acceptors don't have to do anything but accurately follow the algorithm when repeated messages are sent.
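A rough sketch of that counting loop, assuming helper functions send_accept and collect_replies (both are illustrative, not from any particular library):

# Sketch: the proposer learns its value is fixed by counting Accepted
# replies until it sees a majority, resending to acceptors that have not
# replied yet. Retries and timeouts are deliberately simplified.

def run_accept_phase(acceptors, round_id, value, send_accept, collect_replies):
    majority = len(acceptors) // 2 + 1
    acked = set()
    while len(acked) < majority:
        for a in acceptors:
            if a not in acked:
                send_accept(a, round_id, value)    # resend until enough get through
        for sender, reply in collect_replies(timeout=1.0):
            if reply == ("Accepted", round_id):
                acked.add(sender)
            elif reply[0] == "Nack":
                return None                        # round is stale; must re-Prepare
    return value                                   # value is now fixed (chosen)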
For example, suppose the proposer is rebooting or fails during commit, or right after it sends all the ACCEPT messages. What will happen in this case?
Indeed this is an interesting case. A value might be accepted by a majority and so fixed but no-one knows as all scheduled messages have failed to arrive.
The responses to PREPARE messages have the information about the values already accepted. So any node can issue PREPARE messages and learn if a value has been fixed. That is actually the genius of Paxos. Once a value is accepted by a majority it is fixed, because any node running the algorithm must keep choosing the same value under all message loss and crash scenarios.
Typically Paxos uses a stable leader who streams ACCEPT messages for successive rounds with successive values. If the leader crashes any node can timeout and attempt to lead by sending PREPARE messages. Multiple nodes issuing PREPARE messages trying to lead can interrupt each other giving live-lock. Yet they can never disagree about what value is fixed once it is fixed. They can only compete to get their own value fixed until enough messages get through to have a winner.
Once again acceptor nodes don’t have to do anything other than follow the algorithm when a new leader takes over from a crashed leader. The invariants of the algorithm mean that no leader will contradict any previous leader as to the fixed value. New leaders collaborate with old leaders and acceptors can simply trust that this is the case. Eventually enough messages will get through for all nodes to learn the result.

Related

paxos algorithm - how does the propose stage work?

I am looking at the pseudocode for the PROPOSE stage of the paxos algorithm: https://www.cs.rutgers.edu/~pxk/417/notes/paxos.html
did I receive PROMISE responses from a majority of acceptors?
if yes
    do any responses contain accepted values (from other proposals)?
    if yes
        val = accepted_VALUE   // value from PROMISE message with the highest accepted ID
    if no
        val = VALUE            // we can use our proposed value
    send PROPOSE(ID, val) to at least a majority of acceptors
If one of the peers has previously accepted a value (accepted_VALUE), what happens to the value that the proposer is trying to propose (VALUE)?
As I understand it looking at the pseudocode, it gets discarded? That seems like a loss of information...
The proposer does discard its value when an acceptor responds with a value.
I think of it like this: Paxos is a cooperation protocol, much like a wait-free-lock-free algorithm. The proposer's job is not to ensure its value is chosen, but to help the process along. When a proposer sees that it was beaten out by another proposer, it helps to replicate that other proposer's value.
In a similar vein, you can think of it as the proposer is continuing the work of a prior, potentially dead proposer.
You can see this more clearly in the simpler, non-consensus Attiya/Bar-Noy/Dolev (ABD) protocol. Even when reading in ABD the "proposer" re-writes the value to the peers to ensure the value is distributed across the system.
ABD: "Sharing memory robustly in message-passing systems", H. Attiya, A. Bar-Noy, and D. Dolev, 1995
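To make the selection rule in the quoted pseudocode concrete, here is a rough Python sketch; the shape of the promise replies is an assumption for illustration:

# Sketch of the phase-2 value choice: if any Promise carries a previously
# accepted value, adopt the one with the highest accepted round; otherwise
# the proposer is free to use its own value.

def choose_value(promises, my_value):
    # promises: list of (accepted_round, accepted_value) pairs, with
    # (None, None) when that acceptor had accepted nothing yet.
    best_round, best_value = None, None
    for accepted_round, accepted_value in promises:
        if accepted_round is not None and (best_round is None or accepted_round > best_round):
            best_round, best_value = accepted_round, accepted_value
    return best_value if best_value is not None else my_value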

How to handle reordered RPC in raft

When implementing the Raft algorithm, I found there is a situation that I think may or may not do harm to the cluster.
It is reasonable to assume that some AppendEntries RPCs from the leader are received reordered (due to network delay or other reasons). Consider this: the leader sends a heartbeat AppendEntries RPC to peer A with prev_log_index = 1, then sends another AppendEntries RPC carrying entry 2, and then crashes (I ensure this happens immediately with a callback in my test). If the two RPCs are handled in the order in which they were sent, entry 2 will be inserted successfully. However, if the heartbeat RPC is delayed, then peer A will first insert entry 2 and respond to the leader. Then the delayed heartbeat arrives, and peer A will erase entry 2, because the entry conflicts with the leader's prev_log_index = 1. So peer A erases a log entry by mistake.
To dig a little deeper, if the leader doesn't crash immediately, will it fix this? I think that if peer A responds to the delayed heartbeat correctly, the leader will find out and fix it up in some later RPCs.
However, what if peer A's response to entry 2 leads to the commit_index advancing? In this case peer A votes to advance commit_index to 2, even though it does not actually have entry 2. So there may not actually be enough votes for this advance. If the leader crashes now, a node with fewer log entries may be elected as leader. And I did encounter such a situation during my testing.
My question is:
Is my reasoning correct?
If reordered RPCs are a real problem, how should I solve it? Is indexing and caching all RPCs, and forcing them to be handled one by one, a good solution? I found it hard to implement in gRPC.
Raft assumes an ordered stream protocol such as TCP. That is, if a message arrives out of order then it is buffered until its predecessor arrives. (This behavior is why TCP exists: because each individual packet can go through separate routes between servers and there is a high chance of out-of-order messages, and most applications prefer the ease-of-mind of a strict ordering.)
Other protocols, such as plain old Paxos, can work with out-of-order messages, but are typically much slower than Raft.
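If your transport does not give you that ordering (for example, independent unary gRPC calls), one workaround is to tag each AppendEntries RPC with a per-follower sequence number and have the follower buffer anything that arrives early. A minimal sketch, where the seq field is an assumption layered on top of the standard Raft RPC:

# Sketch: enforce per-leader FIFO handling of AppendEntries on the follower
# when the transport does not guarantee ordering. Reset this buffer whenever
# a new leader (new term) is observed.

class OrderedRpcBuffer:
    def __init__(self):
        self.expected_seq = 0
        self.pending = {}                 # seq -> rpc held back until its turn

    def submit(self, rpc, handle):
        """Deliver rpc (and any buffered successors) to handle() in seq order."""
        self.pending[rpc.seq] = rpc
        while self.expected_seq in self.pending:
            handle(self.pending.pop(self.expected_seq))
            self.expected_seq += 1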

Paxos algorithm in the context of distributed database transaction

I had some confusion about paxos, specifically in the context of database transactions:
In the paper "Paxos Made Simple", it says that in the second phase the proposer needs to choose the value with the highest sequence number that one of the acceptors has accepted before (if no such value exists, the proposer is free to choose the value it originally proposed).
Questions:
On one hand, I understand it does so to maintain the consensus.
But on the other hand, I had confusion about what the value actually is - what's the point of "having to send acceptors the value that has been accepted before"?
In the context of database transactions, what if it needs to commit a new value? Does it need to start a new instance of Paxos?
If the answer to the above question is "Yes", then how do the acceptors reset their state? (In my understanding, if they don't reset their state, the proposer would be forced to send one of the old values that has been accepted before rather than being free to commit whatever the new value is.)
There are different kinds of paxos in the "Paxos Made Simple" paper. One is Paxos (plain paxos, single-decree paxos, the Synod protocol), another is Multi-Paxos. From an engineer's point of view, the first is a distributed write-once register and the second is a distributed append-only log.
Answers:
In the context of Paxos, the actual value is the value that was successfully written to the write-once register; this happens when a majority of the acceptors accept a value of the same round. The paper shows that any newly chosen value will always be the same as the previous one (if one was already chosen). So to get the actual value we should initiate a new round and return the value that ends up being written.
In the context of Multi-Paxos the actual value is the latest value added to the log.
With Multi-Paxos we just add a new value to the log. To read the current value we read the log and return the latest version. At a low level, Multi-Paxos is an array of Paxos registers. To write a new value we put it, together with the position of the current value, into a free register, and then we fill the earlier free registers with no-ops. When two registers contain two different next values for the same previous value, we choose the register with the lowest position in the array.
It is possible and trivial with Multi-Paxos: we just start a new round of Paxos over a free register. Although plain Paxos doesn't cover it, we can "extend" it and turn it into a distributed variable instead of a distributed register. I described this idea and the proof in the "A memo on how Paxos works" post.
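As a rough illustration of the "array of Paxos registers" view, here is a sketch in Python; run_paxos(slot, proposal) is an assumed primitive that runs one single-decree instance for a slot and returns whatever value ends up chosen there:

# Sketch: Multi-Paxos as an array of write-once Paxos registers. Appending
# means running single-decree Paxos on the next free slot; if someone else's
# value wins that slot, we record it and retry on the following slot.

class MultiPaxosLog:
    def __init__(self, run_paxos):
        self.run_paxos = run_paxos
        self.log = []                          # chosen values, one per slot

    def append(self, value):
        while True:
            slot = len(self.log)
            chosen = self.run_paxos(slot, value)
            self.log.append(chosen)
            if chosen == value:                # our value made it into this slot
                return slot
            # another proposer's value was chosen here; try the next free slot

    def latest(self):
        return self.log[-1] if self.log else None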
Rather than answering your questions directly, I'll try explaining how one might go about implementing a database transaction with Paxos, perhaps that will help clear things up.
The first thing to notice is that there are two "values" in question here. First is the database value, the application-level data that is being modified. Second is the 'Commit'/'Abort' decision. For Paxos-based transactions, the consensus "value" is the 'Commit'/'Abort' decision.
An important point to keep in mind about database transactions with respect to Paxos consensus is that Paxos does not guarantee that all of the peers involved in the transaction will actually see the consensus decision. When this is needed, as it usually is with databases, it's left to the application to ensure that this happens. This means that the state stored by some peers can lag behind others and any database application built on top of Paxos will need some mechanism for handling this. This can be very complicated and is all application-specific so I'm going to ignore all that completely and focus on ensuring that a simple majority of all database replicas agree on the value of revision 2 of the database key FOO which, of course, is initially set to BAR.
The first step is to send the new value for FOO, let's say that's BAZ, and its expected current revision, 1, along with the Paxos Prepare message. When the database replicas receive this message, they'll first look up their local copy of FOO and check to see if the current revision matches the expected revision included along with the Prepare message. If they match, the database replica will bundle a "Vote Commit" flag along with its Promise message sent in response to the Prepare. If they don't match, "Vote Abort" will be sent instead (the revision check protects against the case where the value was modified since the last time the application read it. Allowing overwrites in this case could corrupt application state).
Once the transaction driver receives a quorum of Promise messages along with their associated "Vote Commit"/"Vote Abort" values, it must choose to propose either "Commit" or "Abort". The first step in choosing this value is to follow the Paxos requirement of checking the Promise messages to see if any database replica (the Acceptor in Paxos terms) has already accepted a "Commit"/"Abort" decision. If any of them has, then the transaction driver must choose the "Commit"/"Abort" value associated with the highest previously accepted proposal ID. If none have, it must decide on its own. This is done by looking at the "Vote Commit"/"Vote Abort" values bundled with the Promises. If a quorum of "Vote Commit"s are present, the transaction driver may propose "Commit"; otherwise it must propose "Abort".
From that point on, it's all standard Paxos messages that get exchanged back and forth until consensus is reached on the 'Commit'/'Abort' decision. Assuming 'Commit' is chosen, the database replicants will update the value and revision associated with FOO to BAZ and 2, respectively.
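Sketching the driver's decision rule from the two paragraphs above (field names like accepted_id and vote are assumptions about the message layout):

# Sketch of the transaction driver's choice after phase 1: a previously
# accepted Commit/Abort decision always wins (highest proposal id first);
# otherwise the driver decides from the bundled Vote Commit/Vote Abort flags.

def decide(promises, cluster_size):
    # promises: list of dicts like
    #   {"accepted_id": int or None, "accepted_decision": "Commit"/"Abort" or None,
    #    "vote": "Vote Commit" or "Vote Abort"}
    prior = [(p["accepted_id"], p["accepted_decision"])
             for p in promises if p["accepted_decision"] is not None]
    if prior:
        return max(prior)[1]               # Paxos rule: adopt the prior decision
    commit_votes = sum(1 for p in promises if p["vote"] == "Vote Commit")
    return "Commit" if commit_votes >= cluster_size // 2 + 1 else "Abort"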
I wrote a long blog post, with links to source code, on the topic of doing transaction log replication with Paxos as described in the Paxos Made Simple paper. Here I give short answers to your questions. The blog post and source code show the complete picture.
On one hand, I understand it does so to maintain the consensus. But on the other hand, I had confusion about what the value actually is - what's the point of "having to send acceptors the value that has been accepted before"?
The value is the command the client is trying to run on the cluster. During an outage the client value transmitted to all nodes by the last leader may have only reached one node in the surviving majority. The new leader may not be that node. The new leader discovers the client value from at least one surviving node and then it transmits it to all the nodes in the surviving majority. In this manner, the new leader collaborates with the dead leader to complete any client work it may have had in progress.
In the context of database transactions, what if it needs to commit a
new value? Does it need to start a new instance of Paxos?
It cannot choose any new commands from clients until it has rebuilt the history of the chosen values selected by the last leader. The blog post talks about this as a "leader takeover phase" where after a crash of the old leader the new leader is trying to bring all nodes fully up to date.
In effect, whatever the last leader transmitted that got to a majority of nodes is chosen; the new leader cannot change this history. During the takeover phase it is simply synchronising nodes to get them all up to date. Only when the new leader has finished this phase, and is known to be fully up to date with all chosen values, can it process any new client commands (i.e. take on any new work).
If the answer to the above question is "Yes", then how do the acceptors reset their state?
They don't. There is a gap between a value being chosen and any node learning that the value had been chosen. In the context of a database you cannot "commit" the value (apply it to the data store) until you have "learnt" the chosen value. Paxos ensures that a chosen value won't ever be undone. So don't commit the value until you learn that the value has been chosen. The blog post gives more details on this.
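A minimal sketch of that separation between accepting and applying, with assumed names (datastore.apply is a stand-in for whatever applies a command to the local store):

# Sketch: a replica only applies (commits) a value to its data store once it
# has learnt that the value was chosen, never merely because it accepted it.

class Replica:
    def __init__(self, datastore):
        self.datastore = datastore
        self.accepted = {}                  # slot -> accepted value, not yet known chosen
        self.applied_up_to = -1

    def on_accept(self, slot, value):
        self.accepted[slot] = value         # may still be superseded; do not apply yet

    def on_learn_chosen(self, slot, value):
        # chosen values are never undone, so applying them is safe
        self.datastore.apply(slot, value)
        self.applied_up_to = max(self.applied_up_to, slot)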
This is from my experience of implementing Raft and reading the ZAB paper, which are the two prevalent incarnations of Paxos. I haven't really gotten into simple Paxos or Multi-Paxos.
When a client sends a commit to any node in the cluster, that node redirects the commit to the leader. The leader then sends the commit message to each node in the quorum, and when the nodes confirm the commit it commits to its own log.

What to do if the leader fails in Multi-Paxos for master-slave systems?

Background:
In section 3, named Implementing a State Machine, of Lamport's paper Paxos Made Simple, Multi-Paxos is described. Multi-Paxos is used in Google Paxos Made Live. (Multi-Paxos is used in Apache ZooKeeper). In Multi-Paxos, gaps can appear:
In general, suppose a leader can get α commands ahead--that is, it can propose commands i + 1 through i + α after commands 1 through i are chosen. A gap of up to α - 1 commands could then arise.
Now consider the following scenario:
The whole system uses master-slave architecture. Only the master serves client commands. Master and slaves reach consensus on the sequence of commands via Multi-Paxos. The master is the leader in Multi-Paxos instances. Assume now the master and two of its slaves have the states (commands have been chosen) shown in the following figure:
[Figure: the chosen commands at the master and its two slaves; the master's log contains gaps and both slaves lag behind.]
Note that there is more than one gap in the master's state. Due to asynchrony, the two slaves lag behind. At this time, the master fails.
Problem:
What should the slaves do after they have detected the failure of the master (for example, by heartbeat mechanism)?
In particular, how should they handle the gaps and the missing commands relative to the old master?
Update about Zab:
As #sbridges has pointed out, ZooKeeper uses Zab instead of Paxos. To quote,
Zab is primarily designed for primary-backup (i.e., master-slave) systems, like ZooKeeper, rather than for state machine replication.
It seems that Zab is closely related to my problems listed above. According to the short overview paper of Zab, the Zab protocol consists of two modes: recovery and broadcast. In recovery mode, two specific guarantees are made: never forgetting committed messages, and letting go of messages that should be skipped. My confusion about Zab is:
In recovery mode does Zab also suffer from the gaps problem? If so, what does Zab do?
The gaps should be the Paxos instances that have not reached agreement. In the paper Paxos Made Simple, a gap is filled by proposing a special "no-op" command that leaves the state unchanged.
If you care about the order of chosen values across Paxos instances, you'd better use Zab instead, because Paxos does not preserve causal order. https://cwiki.apache.org/confluence/display/ZOOKEEPER/PaxosRun
The missing commands should be the Paxos instances that have reached agreement but have not yet been learned by the learner. The value is immutable because it has been accepted by a quorum of acceptors. When you run a Paxos round for such an instance id, the same value will be recovered in phase 1b and chosen again.
When slaves/followers detect a failure of the leader, or the leader loses the support of a quorum of slaves/followers, they should elect a new leader.
In ZooKeeper, there should be no gaps, as the follower communicates with the leader over TCP, which keeps messages FIFO.
In recovery mode, after the leader is elected, the follower first synchronizes with the leader and applies the modifications to its state until NEWLEADER is received.
In broadcast mode, the follower queues each PROPOSAL in pendingTxns and waits for the COMMITs in the same order. If the zxid of a COMMIT does not match the zxid at the head of pendingTxns, the follower will exit.
https://cwiki.apache.org/confluence/display/ZOOKEEPER/Zab1.0
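A tiny sketch of the broadcast-mode check described above (the structure follows the description, not the actual ZooKeeper source; apply_txn is an assumed hook):

# Sketch: follower-side ordering check in broadcast mode. Proposals are
# queued; each COMMIT must match the zxid at the head of the queue,
# otherwise the follower gives up and re-syncs with the leader.

from collections import deque

class Follower:
    def __init__(self, apply_txn):
        self.apply_txn = apply_txn              # state-machine apply hook
        self.pending_txns = deque()

    def on_proposal(self, zxid, txn):
        self.pending_txns.append((zxid, txn))

    def on_commit(self, zxid):
        if not self.pending_txns or self.pending_txns[0][0] != zxid:
            raise RuntimeError("COMMIT out of order; shutting down to resync")
        _, txn = self.pending_txns.popleft()
        self.apply_txn(txn)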
Multi-Paxos is used in Apache ZooKeeper
Zookeeper uses zab, not paxos. See this link for the difference.
In particular, each ZooKeeper node in an ensemble commits updates in the same order as every other node:
Unlike client requests, state updates must be applied in the exact
original generation order of the primary, starting from the original
initial state of the primary. If a primary fails, a new primary that
executes recovery cannot arbitrarily reorder uncommitted state
updates, or apply them starting from a different initial state.
Specifically, the ZAB paper says that a newly elected leader undertakes a discovery phase to learn the next epoch number to set and who has the most up-to-date commit history. Each follower sends an ACK-E message which states the max contiguous zxid it has seen. The leader then undertakes a synchronisation phase where it transmits to the followers the state they have missed. The paper notes that an interesting optimisation is to only elect a leader which has the most up-to-date commit history.
With Paxos you don't have to allow gaps. If you do allow gaps, then the paper Paxos Made Simple explains how to resolve them from page 9. A new leader knows the last committed value it saw and possibly some committed values above. It probes the slots from the lowest gap it knows about by running phase 1 on those slots. If there are values in those slots it runs phase 2 to fix those values, but if it is free to set a value it sets a no-op value. Eventually it gets to a slot number where no values have been proposed, and from there it runs as normal.
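A sketch of that probing loop, where prepare_slot and accept_slot are assumed stand-ins for running phase 1 and phase 2 against a single slot:

# Sketch: a new leader probes every slot from the lowest known gap up to the
# highest slot it has seen. Slots that already carry a value are re-fixed
# with that value; genuinely empty slots are filled with no-ops.

NOOP = "no-op"

def fill_gaps(first_gap, highest_seen, prepare_slot, accept_slot):
    for slot in range(first_gap, highest_seen + 1):
        highest_accepted = prepare_slot(slot)   # phase 1: accepted value, or None
        value = highest_accepted if highest_accepted is not None else NOOP
        accept_slot(slot, value)                # phase 2: fix that value in the slot
    # from highest_seen + 1 onward the leader proposes client commands as normal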
In answer to your questions:
What should the slaves do after they have detected the failure of the master (for example, by heartbeat mechanism)?
They should attempt to lead after a randomised delay to try to reduce the risk of two candidates proposing at the same time which would waste messages and disk flushes as only one can lead. Randomised leader timeout is well covered in the Raft paper; the same approach can be used for Paxos.
In particular, how to handle with the gaps and the missing commands with respect to that of the old master?
The new leader should probe the gaps and fix each one to either the highest value proposed for that slot or a no-op; once it has filled in the gaps it can lead as normal.
The answer of #Hailin explains the gap problem as follows:
In ZooKeeper, there should be no gaps, as the follower communicates with the leader over TCP, which keeps messages FIFO.
To supplement:
In the paper A simple totally ordered broadcast protocol, it mentions that ZooKeeper requires the prefix property:
If $m$ is the last message delivered for a leader $L$, any message proposed before $m$ by $L$ must also be delivered.
This property mainly relies on the TCP channel used by Zab. The Zab wiki mentions that an implementation of Zab must satisfy the following assumption (among others):
Servers must process packets in the order that they are received. Since TCP maintains ordering when sending packets, this means that packets will be processed in the order defined by the sender.

In Paxos, can an Acceptor accept a different value after it has already accepted one?

In Multi-Paxos algorithm, consider this message flow from the viewpoint of an acceptor:
receive: Prepare(N)
reply: Promise(N, null)
receive: Accept!(N, V1)
reply: Accepted(N, V1)
receive: Accept!(N+1, V2)
reply: ?
What should the acceptor's reaction be in this case, according to the protocol? Should it reply with Accepted(N+1, V2), or should it ignore the second Accept!?
I believe this case may happen in Multi-Paxos when a second proposer comes online and believes he is (and always was) leader, therefore he sends his Accept without Preparing. Or if his Prepare simply did not reach the acceptor. If this case may not happen, can you explain why?
I disagree with both other answers.
Multi-Paxos does not say that the Leader is the only proposer; this would cause the system to have a single point of failure, and even during network partitions the system might not be able to make progress. Multi-Paxos is an optimization allowing a single node (the Leader) to skip some prepare phases. Other nodes, thinking the leader is dead, may attempt to continue the instance on her behalf, but must still use the full Basic-Paxos protocol.
Nacking the accept message violates the Paxos algorithm. An acceptor should accept all values unless it has promised not to accept them. (Ignoring is allowed, but only because dropped messages are allowed; it is not recommended.)
There is also an elegant solution to this. The problem is with the Leader's round number (N+1 in the question).
Here are some assumptions:
You have a scheme such that round ids are disjoint across all nodes (required by Paxos anyway).
You have a way of determining which node is the Leader per Paxos instance (required by Multi-Paxos). The Leader is able to change from one Paxos instance to the next.
Aside: The Part-time Parliament suggests this is done by the Leader winning a prior Paxos instance (Section 3.1) and points out she can stay Leader as long as she's alive or the richest (Section 3.3.1). I have an explicit ELECT_NEW_LEADER:<node> value that is proposed via Paxos.
The Leader only skips the Prepare phase on the initial round per instance; and uses full Basic Paxos on subsequent rounds.
With these assumptions, the solution is very simple. The Leader merely picks a really low round id for its initial Accept phase. This id (which I'll call INITIAL_ROUND_ID) can be anything as long as it is lower than all the nodes' round ids. Depending on your id-choosing scheme, -1, 0, or Integer.MIN_VALUE will work.
It works because another node (I'll call him the Stewart) has to go through the full Paxos protocol to propose anything, and his round id is always greater than INITIAL_ROUND_ID. There are two cases to consider: whether or not the Leader's Accept messages reached any of the nodes that the Stewart's Prepare message did.
When the Leader's Accept phase has not reached any of those nodes, the Stewart will get back no value in any Promise, and can proceed just like in regular Basic-Paxos.
And, when the Leader's Accept phase has reached a node, the Stewart will get back a value in a promise, which it uses to continue the algorithm, just like in Basic-Paxos.
In either case, because the Stewart's round id is greater than INITIAL_ROUND_ID, any slow Accept messages a node receives from the Leader will always result in a Nack.
There is no special logic on either the Acceptor or the Stewart. And minimal special logic on the Leader (Viz. pick a really low INITIAL_ROUND_ID).
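To make the round-id comparison concrete, here is a sketch under the assumptions above (the encoding of round ids is illustrative, not prescribed by the answer):

# Sketch: the Leader's skipped-prepare Accept uses INITIAL_ROUND_ID, which is
# lower than any round id a Stewart can generate, so once a Stewart has
# prepared, any late Accept from the Leader is Nacked by the acceptors.

INITIAL_ROUND_ID = -1                       # lower than every normal round id

def make_round_id(counter, node_id, node_count):
    # disjoint round ids across nodes: counter * node_count + node_id >= 0
    return counter * node_count + node_id

def acceptor_on_accept(promised_round, round_id):
    # same rule as plain Paxos: Nack anything below what was promised
    return "Nack" if round_id < promised_round else "Accepted"

# Example: a Stewart that ran Prepare with round id 7 causes the acceptor to
# promise 7; the Leader's late Accept at INITIAL_ROUND_ID (-1) is then Nacked.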
Notice, if we change the OP's question by one character then the OP's self answer is correct: Nack.
receive: Prepare(N)
reply: Promise(N, null)
receive: Accept!(N, V1)
reply: Accepted(N, V1)
receive: Accept!(N-1, V2)
reply: Nack(N, V1)
But as it stands, his answer breaks the Paxos algorithm; the acceptor should accept:
receive: Prepare(N)
reply: Promise(N, null)
receive: Accept!(N, V1)
reply: Accepted(N, V1)
receive: Accept!(N+1, V2)
reply: Accepted(N+1, V2)
The correctness of Multi-Paxos is conditioned on the requirement that the leader (i.e., proposer) does not change between successive Paxos instances. From The Part-Time Parliament Section 3.1 (The Protocol of the Multi-Decree Parliament):
Logically, the parliamentary protocol [a.k.a. Multi-Paxos] used a
separate instance of the complete Synod protocol [a.k.a. Paxos] for each decree number. However,
a single president [a.k.a. proposer/leader] was selected for all these instances, and he performed the first
two steps of the protocol just once.[Added emphasis is mine.]
Therefore, Multi-Paxos makes the assumption that the case which you describe—when a second proposer comes online and believes he is (and always was) leader—will never happen. If such a case can happen, then one should not use Multi-Paxos. With respect to the second possibility—when the second proposer's Prepare did not reach the acceptor—the fact that the second proposer already sent out an Accept! means that it previously sent out a Prepare that was Promised by a quorum of acceptors. Since the acceptors already promised to the first proposer on round N, then the second proposer's Prepare must have been sent out prior to round N. Therefore, the final Accept!(N+1,V2) must have a counter less than N.
Edit: It should also be noted that this version of the protocol is not robust to Byzantine failure:
[The Paxon Parliament's protocol] does not tolerate arbitrary, malicious failures, nor does it guarantee bounded-time response.—The Part-Time Parliament, Section 4.1
Perhaps a simpler answer is to observe that this is the case when the Prepare(N+1) command was accepted by a majority that did not include the acceptor in question.
To elaborate: once a leader knows that some majority has Promised(N+1), it sends Accept!(N+1, x) to all acceptors, and if some majority of acceptors (possibly a different one, not including the acceptor in question) reply with Accepted(N+1), then consensus has been reached.
This is not that unusual a scenario.
(Answering my own question.)
My current understanding is that I should not accept the value in N+1 (i.e. not answer at all, or send a NACK), thus forcing the leader to possibly start another round with Prepare (if the majority hasn't reached consensus yet). After I receive Prepare(N+2), I shall reply with Promise(N+2, V1) and continue as usual.