Paxos understanding - distributed-computing

I have read the paper Paxos Made Simple.
After some hard thinking, I have come to this conclusion:
The Paxos protocol always guarantees that a majority of servers accept the same value in the same round of proposals, so that finally we can derive the consensus from the majority of servers.
For example, Table 1 shows three servers (S1, S2, S3), where N1, N2, ... denote the proposals. There are many gaps in the proposals, but for every proposal we can derive the consensus from the majority of servers.
N1: S1(A) & S2(A) & S3(A) => A
N2: S1(B) & S2(null) & S3(B) => B
N3: S1(null) & S2(C) & S3(C) => C
N4: S1(D) & S2(D) & S3(null) => D
So we get the consensus shown in Table 2.
Is my understanding right?
Table 1:

     N1   N2   N3   N4
S1   A    B    -    D
S2   A    -    C    D
S3   A    B    C    -
Table 2:

     N1   N2   N3   N4
     A    B    C    D

From what you have said, I think you are asking whether consensus on a sequence of values is obtained by looking at the values accepted in increasing proposals. Following this, the log would be obtained from the proposal numbers: ((N1, A), (N2, B), (N3, C), (N4, D)). I think this is the case because you refer to gaps in the sequence of proposals.
If this is the case, then I think there might be some confusion.
In Paxos, there are two things to worry about:
Proposals (or ballots, attempts to choose a value)
Log values (values agreed on by the acceptors and then executed by a state machine)
For the log, there must be fault-tolerant agreement on each value. A value can be executed only if all previous values have also been executed; therefore there must be no gaps in the log if you wish to execute all values. This is a liveness issue, though, and has nothing to do with safety. You can safely learn of all the values in the log and see them as "committed". If it were a machine executing commands issued by clients, you could tell a client that their request for a command will occur, but you couldn't tell them the result of executing their command, as other commands that you do not yet know of will be executed before it.
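To make the liveness point concrete, here is a minimal Python sketch (all names are mine, not from any library) of a replica that can execute only the contiguous prefix of its log, even though entries after a gap are already safely committed:

# Minimal sketch (hypothetical names): a replica applies committed log
# entries strictly in order, so a gap blocks execution but not safety.
committed = {1: "A", 2: "B", 4: "D"}  # index 3 not yet learned: a gap

def executable_prefix(entries):
    # Yield entries that may be executed, stopping at the first gap.
    index = 1
    while index in entries:
        yield index, entries[index]
        index += 1

for index, command in executable_prefix(committed):
    print("executing", index, command)  # executes 1 and 2, then stops

# Entry 4 is committed and will never change, but it cannot be executed
# until index 3 is learned; that is liveness, not safety.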
The log uses the basic Paxos protocol to provide the fault-tolerant agreement on a single value. This is where the idea of committing a value is useful. The basic Paxos protocol outlined in Paxos Made Simple allows for agreement on only a single value, and it uses a totally ordered sequence of proposals (ballots) to agree on that single value.
The main thing here is that there can be multiple proposals involved in deciding on the one value at a single index in the log.
Following that, there might be gaps in the sequence of proposals promised or accepted by acceptors in Paxos, but this is okay and part of the algorithm's operation. The Paxos algorithm ensures that if a value is agreed upon in one proposal, all higher-numbered proposals will see the same value proposed and therefore agreed upon. Please note this is different from the example in Tables 1 and 2 of the question. In that example, if a majority of servers (acceptors) had accepted A in proposal N1, then all later proposals would also accept the value A.
This ability to agree on multiple proposals, but with the same value, is necessary for fault tolerance, in particular for liveness. Imagine a proposal Nl was accepted by a majority of acceptors, and then all but one acceptor crashed before any notification of the acceptances of Nl could be made. A value has been agreed upon, but no one knows that. Now, to recover from this, another proposer should issue a higher-numbered proposal. Let's call this proposal Nh.
Making a proposal has two phases: Promise and Accept. In the first phase, the proposer must obtain promises from a majority (or quorum) of acceptors on the proposal Nh. The purpose of this is to determine whether or not a value was (possibly) agreed upon in a lower-numbered proposal. In this case, we will learn that one acceptor had accepted a value in a lower-numbered proposal, Nl. (The other acceptors who responded will report that they have accepted nothing.) Despite only one acceptor reporting that it accepted a value in Nl, the proposer must (for agreement-safety reasons) assume it was possibly chosen. In the second phase, then, the proposer will issue acceptance requests to the acceptors for the proposal Nh, and will include the value of the earlier proposal Nl. If at least a majority (quorum) of acceptors accept this, then the proposal is chosen and, all things being well, learners will learn that value as the value committed for a single index in the log. Otherwise, the process of issuing proposals to decide on the value at that log index will continue.
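To make the safety-critical step concrete, here is a rough Python sketch (the Promise shape is hypothetical, not from any library) of how the proposer picks the value for its Phase 2 acceptance requests after the Promise phase:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Promise:                       # a Phase 1 reply (hypothetical shape)
    accepted_n: int                  # highest proposal the acceptor accepted
    accepted_value: Optional[str]    # the value it accepted, if any

def phase2_value(promises, my_value, n_acceptors):
    # Given a quorum of promises for Nh, pick the value Phase 2 must send.
    assert len(promises) > n_acceptors // 2, "need a majority of promises"
    prior = [p for p in promises if p.accepted_value is not None]
    if prior:
        # Even one report of an accepted value means it may have been
        # chosen, so the proposer must adopt it instead of its own value.
        return max(prior, key=lambda p: p.accepted_n).accepted_value
    return my_value

# One of three acceptors had accepted "V" in the lower-numbered proposal Nl:
print(phase2_value([Promise(1, "V"), Promise(-1, None)], "W", 3))  # -> V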
Finally, the last point to raise is that there is no requirement that a proposal achieve a majority of acceptances, and this is important because there might not be a majority of non-faulty acceptors at the time a proposal is issued. To motivate this, consider what happens if a proposer, or too many acceptors, fail during a proposal.
Imagine that in an instance of the Paxos algorithm there are three acceptors, and one of them accepts a value in proposal Na. Then, before Na can be accepted by a majority, the proposer of Na fails and any messages in transit over the network are lost. In this case, other proposers need to be able to make new proposals and see a value chosen. In this scenario, another proposer will begin a higher-numbered proposal Nb. It will go through the promise phase as before and, after talking to the two acceptors who didn't accept anything, determine that no value could possibly have been chosen. Now the proposer of Nb can propose any value and see it accepted by a majority and agreed upon. Proposing any value here is safe because the proposer of Na, should it come back online, will not have its value accepted by a majority, because a majority of acceptors have promised and accepted on a higher-numbered proposal.
Michael :)

Related

paxos algorithm - how does the propose stage work?

I am looking at the pseudocode for the PROPOSE stage of the paxos algorithm: https://www.cs.rutgers.edu/~pxk/417/notes/paxos.html
did I receive PROMISE responses from a majority of acceptors?
if yes
    do any responses contain accepted values (from other proposals)?
    if yes
        val = accepted_VALUE  // value from PROMISE message with the highest accepted ID
    if no
        val = VALUE  // we can use our proposed value
    send PROPOSE(ID, val) to at least a majority of acceptors
If one of the peers has previously accepted a value (accepted_VALUE), what happens to the value that the proposer is trying to propose (VALUE)?
As I understand it looking at the pseudocode, it gets discarded? That seems like a loss of information...
The proposer does discard its value when an acceptor responds with a value.
I think of it like this: Paxos is a cooperation protocol, much like a wait-free, lock-free algorithm. The proposer's job is not to ensure that its own value is chosen, but to help the process along. When a proposer sees that it was beaten out by another proposer, it helps to replicate that other proposer's value.
In a similar vein, you can think of it as the proposer is continuing the work of a prior, potentially dead proposer.
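A tiny Python sketch (the Promise shape is hypothetical) of what that cooperation looks like in code; note the proposer's own value is deferred rather than lost:

from collections import namedtuple
Promise = namedtuple("Promise", "accepted_id accepted_value")

def choose_value(promises, my_value, retry_queue):
    # The proposer first helps finish the possibly dead proposer's work;
    # its own value is re-queued for a later proposal, not discarded forever.
    prior = [p for p in promises if p.accepted_value is not None]
    if prior:
        retry_queue.append(my_value)   # try our own value again later
        return max(prior, key=lambda p: p.accepted_id).accepted_value
    return my_value

retry = []
print(choose_value([Promise(7, "other")], "mine", retry))  # -> other
print(retry)                                               # -> ['mine']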
You can see this more clearly in the simpler, non-consensus Attiya/Bar-Noy/Dolev (ABD) protocol. Even when reading in ABD the "proposer" re-writes the value to the peers to ensure the value is distributed across the system.
ABD: "Sharing memory robustly in message-passing systems", H. Attiya, A. Bar-Noy, and D. Dolev, 1995

Commit Failure in Paxos

I am new to distributed systems and consensus algorithms. I understand how it works, but I am confused by some corner cases: when the acceptors have received an ACCEPT for an instance but never hear back about what the final consensus or decision is, how will the acceptors react? For example, the proposer is rebooting, or failed during commit or right after it sent all the ACCEPTs. What will happen in this case?
Thanks.
There are two parts to this question: How do the acceptors react to new proposals? and How do acceptors react if they never learn the result?
In plain-old paxos, the acceptors never actually need to know the result. In fact it is perfectly reasonable that different acceptors have different values in their memory, never knowing if the value they have is the committed value.
The real point of paxos is to deal with the first question. And seeing that an acceptor never actually knows whether it has the committed value, it has to assume that it could have the committed value, while being open to replacing its value in case it doesn't. How does it know which? When receiving a message, the acceptor always compares the round number, and if that is old then the acceptor signals to the proposer that it has to "catch up" first (a Nack). Otherwise, it trusts that the proposer knows what it is doing.
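Here is a rough sketch, in Python with hypothetical message shapes, of that round-number comparison on the acceptor side:

# Sketch of an acceptor's bookkeeping (message shapes are my invention).
class Acceptor:
    def __init__(self):
        self.promised = 0    # highest round promised so far
        self.round = 0       # round of the value below
        self.value = None    # might or might not be the committed value

    def on_prepare(self, n):
        if n < self.promised:
            return ("Nack", self.promised)   # proposer must catch up first
        self.promised = n
        return ("Promise", self.round, self.value)

    def on_accept(self, n, value):
        if n < self.promised:
            return ("Nack", self.promised)
        self.promised = self.round = n
        self.value = value                   # may overwrite an older value
        return ("Accepted", n)

a = Acceptor()
print(a.on_prepare(2))       # ('Promise', 0, None)
print(a.on_accept(2, "X"))   # ('Accepted', 2)
print(a.on_prepare(1))       # ('Nack', 2): round 1 is old, catch up first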
Now for a word about real systems. Some real paxos systems can get away with the acceptors not caring what the committed value is: Paxos is just there to choose what the value will be. But many real systems use Paxos & Friends to make redundant copies of the data for safekeeping.
Some paxos systems will continue paxos-ing until all the acceptors have the data. (Notice that, without interference from other proposers, an extra paxos round copies the committed value everywhere.) Other systems are wary of interference from other proposers and use a distinct Committed message that teaches the acceptors (and other Learners) what the committed value is.
But what happens if the proposer crashes? A subsequent proposer can come along and propose a no-op value. If the subsequent proposer Prepares (Phase 1A) and can communicate with ANY of the acceptors that the prior proposer successfully sent Accepts to (Phase 2A) then it will know what the prior proposer was trying to do (via the response in Phase 1B: PrepareAck). Otherwise a harmless no-op value gets committed.
when the acceptors have received an ACCEPT for an instance but never hear back about what the final consensus or decision is, [how] will the acceptors react?
The node sending the value usually learns its value is fixed by counting positive responses to its ACCEPT messages until it sees a majority. If messages are dropped they can be resent until enough messages get through to determine a majority outcome. The acceptors don't have to do anything but accurately follow the algorithm when repeated messages are sent.
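As a sketch (Python, with a hypothetical send function standing in for the network), the counting-and-resending loop might look like this:

# The sender re-sends ACCEPTs and counts distinct positive replies until a
# majority outcome is determined. (send returns True on a positive reply.)
def await_majority(acceptors, send, max_rounds=10):
    acked = set()
    for _ in range(max_rounds):        # resend until enough messages get through
        for a in acceptors:
            if a not in acked and send(a):
                acked.add(a)
        if len(acked) > len(acceptors) // 2:
            return True                # the value is fixed
    return False                       # outcome still unknown; keep retrying

# Example with a lossy link that drops the first attempt to each acceptor:
attempts = {}
def lossy_send(a):
    attempts[a] = attempts.get(a, 0) + 1
    return attempts[a] > 1

print(await_majority(["S1", "S2", "S3"], lossy_send))  # -> True on the resend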
For example, the proposer is rebooting, or failed during commit or right after it sent all the ACCEPTs. What will happen in this case?
Indeed this is an interesting case. A value might be accepted by a majority and so fixed but no-one knows as all scheduled messages have failed to arrive.
The responses to PREPARE messages carry the information about the values already accepted. So any node can issue PREPARE messages and learn whether a value has been fixed. That is actually the genius of Paxos. Once a value is accepted by a majority it is fixed, because any node running the algorithm must keep choosing the same value under all message-loss and crash scenarios.
Typically Paxos uses a stable leader who streams ACCEPT messages for successive rounds with successive values. If the leader crashes any node can timeout and attempt to lead by sending PREPARE messages. Multiple nodes issuing PREPARE messages trying to lead can interrupt each other giving live-lock. Yet they can never disagree about what value is fixed once it is fixed. They can only compete to get their own value fixed until enough messages get through to have a winner.
Once again acceptor nodes don’t have to do anything other than follow the algorithm when a new leader takes over from a crashed leader. The invariants of the algorithm mean that no leader will contradict any previous leader as to the fixed value. New leaders collaborate with old leaders and acceptors can simply trust that this is the case. Eventually enough messages will get through for all nodes to learn the result.

Why is Paxos designed in two phases

Why does Paxos require two phases (prepare/promise + accept/accepted) instead of a single one? That is, using only the prepare/promise portion: if the proposer has heard back from a majority of acceptors, that value is chosen.
What is the problem, does it break safety or liveness?
It breaks safety not to follow the full protocol.
Typical implementations of multi-paxos have a steady-state mode where a stable leader streams Accept messages containing fresh values. Only when a problem occurs (the leader crashes, stalls, or is partitioned by a network issue) does a new leader need to issue prepare messages to ensure safety. A full description of this is in the write-up of how TRex, an open source Paxos library, implements Paxos.
Consider the following crash scenario which TRex would handle properly:
Nodes A, B, C with A leading
Client application sends V1 to leader A
Node A is in steady state and so sends accept(n, V1) to nodes B and C. The network starts to fail, though, so only B sees the message, and it replies with accepted(n)
Node A sees the response and has a majority {A,B} so it knows the value is fixed due to the safety proof of the protocol.
Node A attempts to broadcast the outcome to everyone just as its server dies. Only the client application that issued V1 gets the message. Imagine that V1 is a customer order, and upon learning the order is fixed, the client application debits the customer's credit card.
Node C times out on the dead leader and attempts to lead. It never saw the value V1. It cannot arbitrarily choose any new value without rolling back the order V1, but the customer has already been charged.
So Node C first issues a prepare(n+1) and node B responds with promise(n+1, V1).
Node C then issues accept(n+1, V1) and as long as the remaining messages get through nodes B and C will learn the value V1 was chosen.
Intuitively we can say that Node C has chosen to collaborate with the dead node A by choosing A's value. So intuitively we can see why there must be two rounds. The first round is needed to discover whether there is any pending work to finish. The second round is used to fix the correct value to give consistency across all processes within the system.
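The trace above can be written down directly; here is a Python sketch with hypothetical in-memory node state, just to make the message order concrete:

# Sketch of the recovery trace above (state shapes are my invention).
def prepare(node, n):
    if n > node["promised"]:
        node["promised"] = n
        return node["accepted"]        # promise(n) carrying any accepted value
    return None                        # Nack

def accept(node, n, v):
    if n >= node["promised"]:
        node["promised"] = n
        node["accepted"] = (n, v)
        return True
    return False

b = {"promised": 0, "accepted": (0, None)}   # node B's acceptor state
accept(b, 1, "V1")                # dying leader A reached only B: accept(n, V1)
prior_round, prior_value = prepare(b, 2)     # C leads: prepare(n+1) -> promise
assert prior_value == "V1"                   # C discovers A's pending work
accept(b, 2, prior_value)                    # C must finish it: accept(n+1, V1)
print(prior_value)                           # -> V1: the customer order survives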
It's not entirely accurate, but you can think of the two phases as 1) copying the data, and then 2) committing the data. If the data is just copied to the other servers, those servers would have no idea if enough other servers have the data for it to be considered safe to serve. Thus there is a second phase to let the servers know that they can commit the data.
Paxos is a little more complex than that, which allows it to continue during failures of either phase. Part of the Paxos proof is that it is the minimal protocol for doing this completely. That is, other protocols do more work, either because they add more capabilities, or because they were poorly designed.

Paxos algorithm in the context of distributed database transaction

I had some confusion about paxos, specifically in the context of database transactions:
In the paper "Paxos Made Simple", it says in the second phase that the proposer needs to choose the value with the highest sequence number that one of the acceptors has accepted before (if no such value exists, the proposer is free to choose the value it originally proposed).
Questions:
On one hand, I understand it does so to maintain the consensus.
But on the other hand, I am confused about what the value actually is - what's the point of "having to send acceptors the value that has been accepted before"?
In the context of database transactions, what if it needs to commit a new value? Does it need to start a new instance of Paxos?
If the answer to the above question is "Yes", then how do acceptors reset their state? (In my understanding, if they don't reset their state, the proposer would be forced to send one of the old values that has been accepted before, rather than being free to commit whatever the new value is.)
There are different kinds of paxos in the "Paxos Made Simple" paper. One is Paxos (plain paxos, single-decree paxos, the Synod protocol); another is Multi-Paxos. From an engineer's point of view, the first is a distributed write-once register and the second is a distributed append-only log.
Answers:
In the context of Paxos, the actual value is the value that was successfully written to the write-once register; this happens when a majority of the acceptors accept a value of the same round. In the paper it was shown that a newly chosen value will always be the same as the previous one (if one was chosen). So to get the actual value we should initiate a new round and return the newly written value.
In the context of Multi-Paxos the actual value is the latest value added to the log.
With Multi-Paxos we just add a new value to the log. To read the current value we read the log and return the latest version. On the low level Multi-Paxos is an array of Paxos-registers. To write a new value we put it with a position of the current value in a free register and then we fill previous free registers with no-op. When two registers contain two different next values for the same previous value we choose the register with the lowest position in the array.
It is possible and trivial with Multi-Paxos: we just start a new round of Paxos over a free register. Although plain Paxos doesn't cover it, we can "extend" it and turn it into a distributed variable instead of a distributed register. I described this idea and the proof in the "A memo on how Paxos works" post.
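Here is a rough Python sketch of the "array of Paxos registers" view (the Register here is a hypothetical stand-in for a full Paxos round; the distributed free-register selection and no-op filling are elided):

class Register:
    # Stand-in for one write-once Paxos instance: the first write wins,
    # later proposals learn and return the already-chosen value.
    def __init__(self):
        self.chosen = None
    def decide(self, proposed):
        if self.chosen is None:
            self.chosen = proposed
        return self.chosen

class Log:
    def __init__(self):
        self.slots = []                      # slot i = independent Paxos instance
    def append(self, value):
        self.slots.append(Register())        # take the next free register
        return self.slots[-1].decide(value)  # may return an older pending value
    def latest(self):
        chosen = [s.chosen for s in self.slots if s.chosen is not None]
        return chosen[-1] if chosen else None  # current value = last log entry

log = Log()
log.append("v1"); log.append("v2")
print(log.latest())   # -> v2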
Rather than answering your questions directly, I'll try explaining how one might go about implementing a database transaction with Paxos, perhaps that will help clear things up.
The first thing to notice is that there are two "values" in question here. First is the database value, the application-level data that is being modified. Second is the 'Commit'/'Abort' decision. For Paxos-based transactions, the consensus "value" is the 'Commit'/'Abort' decision.
An important point to keep in mind about database transactions with respect to Paxos consensus is that Paxos does not guarantee that all of the peers involved in the transaction will actually see the consensus decision. When this is needed, as it usually is with databases, it's left to the application to ensure that this happens. This means that the state stored by some peers can lag behind others and any database application built on top of Paxos will need some mechanism for handling this. This can be very complicated and is all application-specific so I'm going to ignore all that completely and focus on ensuring that a simple majority of all database replicas agree on the value of revision 2 of the database key FOO which, of course, is initially set to BAR.
The first step is to send the new value for FOO, let's say that's BAZ, and its expected current revision, 1, along with the Paxos Prepare message. When the database replicas receive this message, they'll first look up their local copy of FOO and check whether the current revision matches the expected revision included with the Prepare message. If they match, the database replica will bundle a "Vote Commit" flag along with its Promise message sent in response to the Prepare. If they don't match, "Vote Abort" will be sent instead. (The revision check protects against the case where the value was modified since the last time the application read it; allowing overwrites in this case could corrupt application state.)
Once the transaction driver receives a quorum of Promise messages along with their associated "Vote Commit"/"Vote Abort" values, it must choose to propose either "Commit" or "Abort". The first step in choosing this value is to follow the Paxos requirement of checking the Promise messages to see if any database replica (the Acceptor in Paxos terms) has already accepted a "Commit"/"Abort" decision. If any of them has, then the transaction driver must choose the "Commit"/"Abort" value associated with the highest previously accepted proposal ID. If they haven't, it must decide on its own. This is done by looking at the "Vote Commit"/"Vote Abort" values bundled with the Promises. If a quorum of "Vote Commit"s are present, the transaction driver may propose "Commit"; otherwise it must propose "Abort".
From that point on, it's all standard Paxos messages that get exchanged back and forth until consensus is reached on the 'Commit'/'Abort' decision. Assuming 'Commit' is chosen, the database replicas will update the value and revision associated with FOO to BAZ and 2, respectively.
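A compact Python sketch of the driver's decision step described above (the Promise fields are hypothetical names for the information described here, not a real API):

from collections import namedtuple
Promise = namedtuple("Promise", "vote accepted_id accepted_decision")

def decide(promises, quorum):
    assert len(promises) >= quorum
    # Paxos rule first: a decision accepted in an earlier proposal wins.
    prior = [p for p in promises if p.accepted_decision is not None]
    if prior:
        return max(prior, key=lambda p: p.accepted_id).accepted_decision
    # Otherwise the driver decides on its own: commit only on a quorum
    # of "Vote Commit" responses from the revision checks.
    commits = sum(1 for p in promises if p.vote == "Vote Commit")
    return "Commit" if commits >= quorum else "Abort"

# Two of three replicas matched revision 1 of FOO and vote to commit:
print(decide([Promise("Vote Commit", None, None),
              Promise("Vote Commit", None, None)], quorum=2))  # -> Commit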
I wrote a long blog post, with links to source code, on the topic of doing transaction log replication with Paxos as described in the Paxos Made Simple paper. Here I give short answers to your questions; the blog post and source code show the complete picture.
On one hand, I understand it does so to maintain the consensus. But on the other hand, I had confusion about what the value actually is - what's the point of "having to send acceptors the value that has been accepted before"?
The value is the command the client is trying to run on the cluster. During an outage the client value transmitted to all nodes by the last leader may have only reached one node in the surviving majority. The new leader may not be that node. The new leader discovers the client value from at least one surviving node and then it transmits it to all the nodes in the surviving majority. In this manner, the new leader collaborates with the dead leader to complete any client work it may have had in progress.
In the context of database transactions, what if it needs to commit a new value? Does it need to start a new instance of Paxos?
It cannot choose any new commands from clients until it has rebuilt the history of the chosen values selected by the last leader. The blog post talks about this as a "leader takeover phase" where after a crash of the old leader the new leader is trying to bring all nodes fully up to date.
In effect, whatever the last leader transmitted that reached a majority of nodes is chosen; the new leader cannot change this history. During the takeover phase it is simply synchronising nodes to get them all up to date. Only when the new leader has finished this phase, and is known to be fully up to date with all chosen values, can it process any new client commands (i.e. process any new work).
If the answer to the above question is "Yes", then how do acceptors reset the state?
They don't. There is a gap between a value being chosen and any node learning that the value had been chosen. In the context of a database you cannot "commit" the value (apply it to the data store) until you have "learnt" the chosen value. Paxos ensures that a chosen value won't ever be undone. So don't commit the value until you learn that the value has been chosen. The blog post gives more details on this.
This is from my experience of implementing Raft and reading the ZAB paper, which are two prevalent incarnations of Paxos. I haven't really gotten into plain Paxos or Multi-Paxos.
When a client sends a commit to any node in the cluster, that node will redirect the commit to the leader. The leader then sends the commit message to each node in the quorum, and when all of those nodes confirm the commit, it commits to its own log.

In Paxos, can an Acceptor accept a different value after it has already accepted one?

In Multi-Paxos algorithm, consider this message flow from the viewpoint of an acceptor:
receive: Prepare(N)
reply: Promise(N, null)
receive: Accept!(N, V1)
reply: Accepted(N, V1)
receive: Accept!(N+1, V2)
reply: ?
What should the acceptor's reaction be in this case, according to the protocol? Should it reply with Accepted(N+1, V2), or should it ignore the second Accept!?
I believe this case may happen in Multi-Paxos when a second proposer comes online and believes he is (and always was) the leader, and therefore sends his Accept without Preparing. Or if his Prepare simply did not reach the acceptor. If this case may not happen, can you explain why?
I disagree with both other answers.
Multi-Paxos does not say that the Leader is the only proposer; this would give the system a single point of failure, and even during network partitions the system might not be able to make progress. Multi-Paxos is an optimization allowing a single node (the Leader) to skip some prepare phases. Other nodes, thinking the leader is dead, may attempt to continue the instance on her behalf, but must still use the full Basic Paxos protocol.
Nacking the accept message violates the Paxos algorithm. An acceptor should accept all values unless it has promised not to accept them. (Ignoring is allowed, but not recommended; it is tolerated only because dropped messages are allowed.)
There is also an elegant solution to this. The problem is with the Leader's round number (N+1 in the question).
Here are some assumptions:
You have a scheme such that round ids are disjoint across all nodes (required by Paxos anyway).
You have a way of determining which node is the Leader per Paxos instance (required by Multi-Paxos). The Leader is able to change from one Paxos instance to the next.
Aside: The Part-time Parliament suggests this is done by the Leader winning a prior Paxos instance (Section 3.1) and points out she can stay Leader as long as she's alive or the richest (Section 3.3.1). I have an explicit ELECT_NEW_LEADER:<node> value that is proposed via Paxos.
The Leader only skips the Prepare phase on the initial round per instance; and uses full Basic Paxos on subsequent rounds.
With these assumptions, the solution is very simple. The Leader merely picks a really low round id for its initial Accept phase. This id (which I'll call INITIAL_ROUND_ID) can be anything as long as it is lower than all the nodes' round ids. Depending on your id-choosing scheme, either -1, 0, or Integer.MIN_VALUE will work.
It works because another node (I'll call him the Stewart) has to go through the full Paxos protocol to propose anything and his round id is always greater than INITIAL_ROUND_ID. There are two cases to consider: whether or not the Leader's Accept messages reached any of the nodes the Stewart's Prepare message did.
When the Leader's Accept phase has not reached any of those nodes, the Stewart will get back no value in any Promise, and can proceed just as in regular Basic Paxos.
And when the Leader's Accept phase has reached such a node, the Stewart will get back a value in a Promise, which it uses to continue the algorithm, just as in Basic Paxos.
In either case, because the Stewart's round id is greater than INITIAL_ROUND_ID, any slow Accept messages a node receives from the Leader will always result in a Nack.
There is no special logic on either the Acceptor or the Stewart, and only minimal special logic on the Leader (viz. pick a really low INITIAL_ROUND_ID).
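For illustration, here is one id-choosing scheme (a sketch; the pair encoding is my assumption, not mandated by the answer) under which INITIAL_ROUND_ID sorts below every real round id:

# Rounds are (counter, node_id) pairs, disjoint per node and compared
# lexicographically, with a sentinel that sorts below every real id.
INITIAL_ROUND_ID = (-1, -1)          # the Leader's first-Accept round id

def next_round_id(current, node_id):
    counter, _ = current
    return (counter + 1, node_id)    # strictly greater and unique per node

stewart_id = next_round_id(INITIAL_ROUND_ID, node_id=2)   # (0, 2)
assert INITIAL_ROUND_ID < stewart_id
# Any Stewart running full Paxos outranks the Leader's initial round, so a
# slow Accept(INITIAL_ROUND_ID, v) arriving later is Nacked, as required.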
Notice, if we change the OP's question by one character then the OP's self answer is correct: Nack.
receive: Prepare(N)
reply: Promise(N, null)
receive: Accept!(N, V1)
reply: Accepted(N, V1)
receive: Accept!(N-1, V2)
reply: Nack(N, V1)
But as it stands, his answer breaks the Paxos algorithm; the acceptor should accept:
receive: Prepare(N)
reply: Promise(N, null)
receive: Accept!(N, V1)
reply: Accepted(N, V1)
receive: Accept!(N+1, V2)
reply: Accepted(N+1, V2)
The correctness of Multi-Paxos is conditioned on the requirement that the leader (i.e., proposer) does not change between successive Paxos instances. From The Part-Time Parliament Section 3.1 (The Protocol of the Multi-Decree Parliament):
Logically, the parliamentary protocol [a.k.a. Multi-Paxos] used a separate instance of the complete Synod protocol [a.k.a. Paxos] for each decree number. However, a single president [a.k.a. proposer/leader] was selected for all these instances, and he performed the first two steps of the protocol just once. [Added emphasis is mine.]
Therefore, Multi-Paxos makes the assumption that the case which you describe—when a second proposer comes online and believes he is (and always was) leader—will never happen. If such a case can happen, then one should not use Multi-Paxos. With respect to the second possibility—when the second proposer's Prepare did not reach the acceptor—the fact that the second proposer already sent out an Accept! means that it previously sent out a Prepare that was Promised by a quorum of acceptors. Since the acceptors already promised to the first proposer on round N, then the second proposer's Prepare must have been sent out prior to round N. Therefore, the final Accept!(N+1,V2) must have a counter less than N.
Edit: It should also be noted that this version of the protocol is not robust to Byzantine failure:
[The Paxon Parliament's protocol] does not tolerate arbitrary, malicious failures, nor does it guarantee bounded-time response.—The Part-Time Parliament, Section 4.1
Perhaps a simpler answer is to observe that this is the case when the Prepare(N+1) command was accepted by a majority that did not include the acceptor in question.
To elaborate: once a leader knows that some majority has Promised(N+1), it then sends Accept(N+1, x) to all acceptors, and if some other majority of acceptors (one not including this acceptor) replies with Accepted(N+1), then consensus has been reached.
This is not that unusual a scenario.
(Answering my own question.)
My current understanding is that I should not accept the value in N+1 (i.e. not answer at all, or send a NACK), thus forcing the leader to possibly start another round with Prepare (if the majority hasn't reached consensus yet). After I receive Prepare(N+2), I shall reply with Promise(N+2, V1) and continue as usual.