In 2PC, what happens if the coordinator asks 3 participants to commit and the second one fails without responding to the coordinator?
A client then asks the second node for the value; the second node has just come back up but has not yet managed to commit, so it returns an old value... Is that a fault of 2PC?
The missing part of 2PC - 2PR (two-phase read)
If a commit message is lost or does not take effect for some reason at some participant, the resource remains in the prepared state (which is uncertain), even after a restart, because the prepared state must be persisted to non-volatile storage before the coordinator can ever send a commit message.
Anyone trying to read an uncertain resource must consult the coordinator to determine the exact state of that resource. Once the state is determined, the right version of the value can be chosen.
In your case, the second node returns the new value (with the coordinator's help it finds out that the new value is really committed and the old value is stale).
---------- edit --------------
Some implementations take an exclusive lock during the prepare phase, which means that once a resource is prepared, nobody else can read or write it. So, before the participant has committed, anyone trying to read must wait.
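To make the first approach concrete, here is a minimal sketch in Java of a participant that consults the coordinator when a read hits a key that is still in the prepared, uncertain state. The types and method names (Participant, Coordinator, Versioned, outcomeOf) are invented for the example and are not taken from any particular 2PC implementation.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class Participant {
    enum Outcome { COMMITTED, ABORTED, UNKNOWN }

    interface Coordinator {
        Outcome outcomeOf(long txId);          // hypothetical lookup of the commit decision
    }

    static final class Versioned {
        final String committedValue;           // last value known to be committed
        final String preparedValue;            // value written by a prepared, undecided tx (or null)
        final long preparedTxId;
        Versioned(String committedValue, String preparedValue, long preparedTxId) {
            this.committedValue = committedValue;
            this.preparedValue = preparedValue;
            this.preparedTxId = preparedTxId;
        }
    }

    // In a real system this map would be populated by the prepare/commit handlers.
    private final Map<String, Versioned> store = new ConcurrentHashMap<>();
    private final Coordinator coordinator;

    Participant(Coordinator coordinator) {
        this.coordinator = coordinator;
    }

    String read(String key) {
        Versioned v = store.get(key);
        if (v == null) {
            return null;
        }
        if (v.preparedValue == null) {
            return v.committedValue;           // no uncertainty: return the committed value
        }
        // Uncertain: only the coordinator knows whether the prepared write committed.
        switch (coordinator.outcomeOf(v.preparedTxId)) {
            case COMMITTED: return v.preparedValue;
            case ABORTED:   return v.committedValue;
            default:        throw new IllegalStateException("outcome still unknown, ask again later");
        }
    }
}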
If the coordinator is asking them to commit, then it means that all participants have already answered that they are prepared to commit. Prepared means that the participant is guaranteed to be able to commit. There is no failure. If the node vanished in a meteor strike, then the node is restored from the HA/DR data and the restored node resumes the transaction and proceeds with the commit.
Participants in 2PC are durable, persisted resource managers capable of backup and restore. In theory, in the case when one of the participants cannot be restored, every participant, and the coordinators, are all restored back to a point in time before the last coordinated transaction. In practice, all coordinators support the case where a participant is lost: the transaction can be manually forced into one state or the other; see Resolve Transactions Manually or Resolving indoubt transactions manually.
Related
I have a question about this paragraph:
"Initially, all transactions are local. If a non-XA data source connection is the first resource connection enlisted in a transaction scope, it will become a global transaction when a (second) XA data source connection joins it. If a second non-XA data source connection attempts to join, an exception is thrown." -> link https://docs.oracle.com/cd/E19229-01/819-1644/detrans.html (Global and Local TRansaction).
Can I have the first connection non-XA and the second XA? Would the first become XA without any exception thrown? (I'm in doubt.)
Can I have the first transaction marked XA, the second marked XA, and the third non-XA? (I suppose not.)
What happens if the first EJB (trans-type=required) uses XA on its DB and calls a remote EJB (trans-type=required, deployed in another app server) that uses a non-XA DB? Could I have, at that moment, two distinct transactions, so that XA is not the right choice? What happens if the two EJBs are in the same server but in two distinct EARs?
"In scenarios where there is only a single one-phase commit resource provider that participates in the transaction and where all the two-phase commit resource-providers that participate in the transaction are used in a read-only fashion. In this case, the two-phase commit resources all vote read-only during the prepare phase of two-phase commit. Because the one-phase commit resource provider is the only provider to complete any updates, the one-phase commit resource does not have to be prepared."
https://www.ibm.com/support/knowledgecenter/SSEQTP_8.5.5/com.ibm.websphere.base.doc/ae/cjta_trans.html
What does read-only mean? So can we mix XA updates with read-only non-XA?
Some of these should really be split out into separate questions. I can answer the first couple of questions.
Can I have the first connection non-XA and the second XA?
Yes, if you are willing to use Last Participant Support.
Would the first become XA without any exception thrown?
No, the transaction manager cannot convert a non-xa capable connection into one that is xa capable. A normal non-xa commit or rollback will be performed on the connection, but it still participates in the transaction alongside the XA resources. I'll discuss how this is done further down in summarizing the Last Participant Support optimization.
Can I have the first transaction marked XA, the second marked XA, and the third non-XA?
I assume you meant to say first connection marked xa, and so forth. Yes, you can do this by relying on Last Participant Support.
What does read-only mean?
read-only refers to usage of the transactional resource in a way that does not modify any data. For example, you might run a query that locks a row in a database and reads data from it, but does not perform any updates on it.
So can we mix XA updates with read-only non-XA?
You have this in reverse. The document that you cited indicates that the XA resources can be read-only and the non-xa resource can make updates. This works because the XA resources have a spec-defined way of indicating to the transaction manager that they did not modify any data (by voting XA_RDONLY in their response to the xa.prepare request). Because they haven't written any data, they only need to release their locks, so the commit of the overall transaction reduces to a non-xa commit/rollback of the one-phase resource; after that, either resolution of the xa-capable resources (commit or rollback) has the same effect.
Last Participant Support
Last Participant Support, mentioned earlier, is a feature of the application server that simulates the participation of a non-xa resource as part of a transaction alongside one or more xa-capable resources. There are some risks involved in relying on this optimization, namely a timing window where the transaction can be left in-doubt, requiring manual intervention to resolve it.
Here is how it works:
You operate on all of the enlisted resources (xa and non-xa) as you normally would, and when you are ready, you invoke the userTransaction.commit operation (or rely on container-managed transactions to issue the commit for you). When the transaction manager receives the request to commit, it sees that there is a non-xa resource involved and orders the prepare/commit operations to the backend in a special way.

First, it tells all of the xa-capable resources to do xa.prepare and receives the vote from each of them. If all indicate that they have successfully prepared and would be able to commit, then the transaction manager proceeds to issue a commit to the non-xa resource.

If the commit of the non-xa resource succeeds, the transaction manager commits all of the xa-capable resources. Even if the system goes down at this point, it is written in the recovery log that these resources must commit; the transaction manager will later find them during a recovery attempt and commit them, with their corresponding records in the back end remaining locked until that happens. If the commit of the non-xa resource fails, the transaction manager instead rolls back all of the xa-capable resources.

The risk here comes from the possibility that the request to commit the non-xa-capable resource might not return at all, leaving the transaction manager no way of knowing whether that resource has committed or rolled back, and thus no way of knowing whether to commit or roll back the xa-capable resources, leaving the transaction in-doubt and in need of manual intervention to properly recover. Only enable/rely upon Last Participant Support if you are okay with accepting this risk.
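Below is a minimal sketch, in plain Java, of the prepare/commit ordering just described. It is illustrative only, not the application server's actual implementation: the class name, the single java.sql.Connection standing in for the one-phase resource, and the omission of recovery logging are all assumptions made for the example.

import java.sql.Connection;
import java.sql.SQLException;
import java.util.List;
import javax.transaction.xa.XAException;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

public class LastParticipantCommit {

    // Commits a transaction containing several xa-capable resources and exactly one
    // non-xa (one-phase) resource, in the order described above.
    public static void commit(Xid xid, List<XAResource> xaResources, Connection onePhase)
            throws XAException, SQLException {
        // Phase 1: prepare every xa-capable resource and collect its vote.
        try {
            for (XAResource xa : xaResources) {
                xa.prepare(xid);              // returns XA_OK or XA_RDONLY; throws to vote rollback
            }
        } catch (XAException voteRollback) {
            rollbackXa(xid, xaResources);
            onePhase.rollback();
            throw voteRollback;
        }

        // The one-phase resource is driven last, with a plain (non-xa) commit.
        try {
            onePhase.commit();
        } catch (SQLException commitFailed) {
            // A definite failure: roll back the prepared xa resources.
            // If this call simply never returned, we would not know whether to commit
            // or roll back them -- that is the in-doubt window described above.
            rollbackXa(xid, xaResources);
            throw commitFailed;
        }

        // Phase 2: the one-phase resource committed, so commit the xa resources.
        for (XAResource xa : xaResources) {
            xa.commit(xid, false);            // false = two-phase commit path
        }
    }

    private static void rollbackXa(Xid xid, List<XAResource> xaResources) {
        for (XAResource xa : xaResources) {
            try {
                xa.rollback(xid);
            } catch (XAException ignored) {
                // a resource that already rolled back (or is unreachable) is left to recovery
            }
        }
    }
}

The important point the sketch highlights is the ordering: every xa-capable resource is prepared before the non-xa resource is committed, and the xa resources are only committed afterwards.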
All the samples usually demonstrate some sort of change to reliable collections with CommitAsync() or rollback in case of a failure. My code is using TryRemoveAsync(), so failure is not a concern (will be retried later).
Is there a significant downside to invoking tx.CommitAsync() when no changes to reliable collections were performed?
Whenever you open a transaction and execute commands against a collection, these commands acquire locks in the TStore (collection) and are recorded in the transaction's temporary dictionary (change tracking) and also in the transaction logs; the replicator then forwards these changes to the replicas.
Once you execute tx.CommitAsync(), the temporary records are saved to disk, the transaction is recorded in the logs and then replicated to the secondary replicas so that they also commit and save to disk, and then the locks are released.
If the collection is not modified, the transaction won't have anything to save or replicate and will simply be closed.
If you don't call tx.CommitAsync() after the operation, the transaction is aborted, any pending operations (if any) are discarded, and the abort is written to the logs to notify the other replicas.
Both cases, commit and abort, generate logs (and replicate them). The only detail I am not sure about is whether these logs are also generated when no changes are in place; I assume they are. Regarding performance, the act of reading or attempting to change a collection acquires locks that have to be released with a commit or abort; I think these have the biggest impact on your code, because they prevent other threads from modifying the collection until you complete the transaction. In this case I wouldn't be too worried about committing an empty transaction.
// Create a new Transaction object for this partition
using (ITransaction tx = base.StateManager.CreateTransaction())
{
    // Modify the collection
    await m_dic.AddAsync(tx, key, value, cancellationToken);

    // CommitAsync sends the Commit record to the log & secondary replicas;
    // after a quorum responds, all locks are released
    await tx.CommitAsync();
} // If CommitAsync is not called, this line will Dispose the transaction and discard the changes
You can find most of these details in this documentation.
If you really want to go deep into the implementation details to answer this question, I suggest you dig for the answer in the source code for the replicator here.
We have a spring-batch process that uses the bitronix transaction manager. On the first pass of a particular step, we see the expected commit behavior - data is only committed to the target database when the transaction boundary is reached.
However, on the second and subsequent passes, rows are committed as soon as they are written. That is, they do not wait for the commit point.
We have confirmed that the bitronix commit is only called at the expected points.
Has anyone experienced this behavior before? What kind of bug am I looking for?
Java XA is designed in such a way that connections cannot be reused across transactions. Once the transaction is committed, the connection property is changed to autocommit=true, and the connection cannot be used in another transaction until it is returned to the connection pool and retrieved by the XA code again.
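For illustration, here is a hedged sketch of the pattern this implies in a JTA setup: the connection is borrowed from the (XA-aware) pool only after the transaction begins and goes back to the pool before the next transaction starts, rather than being held across transactions. The class name, the table in the INSERT, and the DataSource wiring (for example a Bitronix PoolingDataSource) are assumptions made for the example, not taken from the question's code.

import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;

public class XaWriteStep {
    private final UserTransaction ut;
    private final DataSource xaPool;     // assumed: an XA-aware pool such as Bitronix's PoolingDataSource

    public XaWriteStep(UserTransaction ut, DataSource xaPool) {
        this.ut = ut;
        this.xaPool = xaPool;
    }

    public void writeChunk(Iterable<String> rows) throws Exception {
        ut.begin();
        try {
            // Borrow the connection *after* begin() so the pool enlists it in this
            // transaction; it returns to the pool when the try-with-resources closes it.
            try (Connection con = xaPool.getConnection();
                 PreparedStatement ps = con.prepareStatement("INSERT INTO target(value) VALUES (?)")) {
                for (String row : rows) {
                    ps.setString(1, row);
                    ps.addBatch();
                }
                ps.executeBatch();
            }
            ut.commit();                 // rows become visible only here, at the transaction boundary
        } catch (Exception e) {
            ut.rollback();
            throw e;
        }
    }
}

Holding one connection across transaction boundaries is exactly the situation the answer warns about: after the first commit the connection reverts to autocommit=true, so subsequent writes are committed row by row.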
I had some confusion about paxos, specifically in the context of database transactions:
In the paper "paxos made simple", it says in the second phase that the proposer needs to choose one of the values with the highest sequence number which one of the acceptors has accepted before (if no such value exists, the proposer is free to choose the original value is proposed).
Questions:
On one hand, I understand it does so to maintain the consensus.
But on the other hand, I had confusion about what the value actually is - what's the point of "having to send acceptors the value that has been accepted before"?
In the context of database transactions, what if it needs to commit a new value? Does it need to start a new instance of Paxos?
If the answer to the above question is "Yes", then how do the acceptors reset their state? (In my understanding, if they don't reset their state, the proposer would be forced to send one of the old values that has been accepted before rather than being free to commit whatever the new value is.)
There are different kinds of Paxos in the "Paxos Made Simple" paper. One is Paxos (plain Paxos, single-decree Paxos, Synod), another is Multi-Paxos. From an engineer's point of view, the first is a distributed write-once register and the second is a distributed append-only log.
Answers:
In the context of Paxos, the actual value is the value that was successfully written to the write-once register; this happens when a majority of the acceptors accept a value of the same round. The paper shows that any newly chosen value will always be the same as the previously chosen one (if one was chosen). So to get the actual value we should initiate a new round and return the newly written value.
In the context of Multi-Paxos the actual value is the latest value added to the log.
With Multi-Paxos we just add a new value to the log. To read the current value we read the log and return the latest version. At a low level, Multi-Paxos is an array of Paxos registers. To write a new value we put it, together with the position of the current value, into a free register and then fill the earlier free registers with no-ops. When two registers contain two different next values for the same previous value, we choose the register with the lower position in the array.
It is possible and trivial with Multi-Paxos: we just start a new round of Paxos over a free register. Although plain Paxos doesn't cover it, we can "extend" it and turn it into a distributed variable instead of a distributed register. I described this idea and the proof in the "A memo on how Paxos works" post.
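To make the write-once-register view concrete, here is a minimal, illustrative sketch of a single acceptor's promise/accept rules in Java. The names and message shapes are my own, not taken verbatim from the paper.

class Acceptor {
    private long promisedRound = -1;       // highest round we have promised not to go below
    private long acceptedRound = -1;       // round of the value we accepted, if any
    private String acceptedValue = null;   // the accepted value, if any

    // Phase 1: answer a Prepare(round); the promise carries any previously accepted value.
    synchronized Promise prepare(long round) {
        if (round <= promisedRound) {
            return Promise.reject();
        }
        promisedRound = round;
        return Promise.ok(acceptedRound, acceptedValue);
    }

    // Phase 2: answer an Accept(round, value).
    synchronized boolean accept(long round, String value) {
        if (round < promisedRound) {
            return false;                  // we already promised a higher round
        }
        promisedRound = round;
        acceptedRound = round;
        acceptedValue = value;             // once a majority accepts the same round, the value is chosen
        return true;
    }

    static final class Promise {
        final boolean ok;
        final long acceptedRound;
        final String acceptedValue;
        private Promise(boolean ok, long acceptedRound, String acceptedValue) {
            this.ok = ok;
            this.acceptedRound = acceptedRound;
            this.acceptedValue = acceptedValue;
        }
        static Promise ok(long round, String value) { return new Promise(true, round, value); }
        static Promise reject() { return new Promise(false, -1, null); }
    }
}

The "write-once" behaviour comes from the prepare rule: once a value has been accepted by a majority, any later proposer will learn it from the promises and is obliged to re-propose it.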
Rather than answering your questions directly, I'll try explaining how one might go about implementing a database transaction with Paxos, perhaps that will help clear things up.
The first thing to notice is that there are two "values" in question here. First is the database value, the application-level data that is being modified. Second is the 'Commit'/'Abort' decision. For Paxos-based transactions, the consensus "value" is the 'Commit'/'Abort' decision.
An important point to keep in mind about database transactions with respect to Paxos consensus is that Paxos does not guarantee that all of the peers involved in the transaction will actually see the consensus decision. When this is needed, as it usually is with databases, it's left to the application to ensure that this happens. This means that the state stored by some peers can lag behind others and any database application built on top of Paxos will need some mechanism for handling this. This can be very complicated and is all application-specific so I'm going to ignore all that completely and focus on ensuring that a simple majority of all database replicas agree on the value of revision 2 of the database key FOO which, of course, is initially set to BAR.
The first step is to send the new value for FOO, let's say that's BAZ, and its expected current revision, 1, along with the Paxos Prepare message. When the database replicas receive this message, they'll first look up their local copy of FOO and check whether the current revision matches the expected revision included with the Prepare message. If they match, the database replica will bundle a "Vote Commit" flag with the Promise message it sends in response to the Prepare. If they don't match, "Vote Abort" will be sent instead (the revision check protects against the case where the value was modified since the last time the application read it; allowing overwrites in this case could corrupt application state).
Once the transaction driver receives a quorum of Promise messages along with their associated "Vote Commit"/"Vote Abort" values, it must choose to propose either "Commit" or "Abort". The first step in choosing this value is to follow the Paxos requirement of checking the Promise messages to see if any database replica (the Acceptor in Paxos terms) has already accepted a "Commit"/"Abort" decision. If any of them has, then the transaction driver must choose the "Commit"/"Abort" value associated with the highest previously accepted proposal ID. If none has, it must decide on its own. This is done by looking at the "Vote Commit"/"Vote Abort" values bundled with the Promises. If a quorum of "Vote Commit"s are present, the transaction driver may propose "Commit"; otherwise it must propose "Abort".
From that point on, it's all standard Paxos messages that get exchanged back and forth until consensus is reached on the 'Commit'/'Abort' decision. Assuming 'Commit' is chosen, the database replicas will update the value and revision associated with FOO to BAZ and 2, respectively.
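A hedged sketch of the transaction driver's choice of proposal value, as described above, might look like this. The Promise and Decision types and the field names are invented for the example; it assumes the driver works with exactly the quorum of promises it collected.

import java.util.List;

enum Decision { COMMIT, ABORT }

class Promise {
    Integer acceptedProposalId;   // null if this acceptor has accepted nothing yet
    Decision acceptedValue;       // value accepted with that proposal id, if any
    boolean voteCommit;           // the "Vote Commit" flag bundled by the replica
}

class TransactionDriver {
    // Chooses the value to propose in phase 2, given a quorum of promises.
    static Decision chooseProposal(List<Promise> quorum) {
        // Paxos rule: if any acceptor already accepted a value, we must re-propose
        // the value attached to the highest accepted proposal id.
        Promise highest = null;
        for (Promise p : quorum) {
            if (p.acceptedProposalId != null
                    && (highest == null || p.acceptedProposalId > highest.acceptedProposalId)) {
                highest = p;
            }
        }
        if (highest != null) {
            return highest.acceptedValue;
        }
        // Otherwise the driver is free to decide: propose Commit only if every
        // promise in the collected quorum carried a "Vote Commit".
        for (Promise p : quorum) {
            if (!p.voteCommit) {
                return Decision.ABORT;
            }
        }
        return Decision.COMMIT;
    }
}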
I wrote a long blog post, with links to source code, on the topic of doing transaction log replication with Paxos as described in the Paxos Made Simple paper. Here I give short answers to your questions; the blog post and source code show the complete picture.
On one hand, I understand it does so to maintain the consensus. But on
the other hand, I had confusion about what the value actually is -
what's the point of "having to send acceptors the value that has been
accepted before"?
The value is the command the client is trying to run on the cluster. During an outage the client value transmitted to all nodes by the last leader may have only reached one node in the surviving majority. The new leader may not be that node. The new leader discovers the client value from at least one surviving node and then it transmits it to all the nodes in the surviving majority. In this manner, the new leader collaborates with the dead leader to complete any client work it may have had in progress.
In the context of database transactions, what if it needs to commit a
new value? Does it need to start a new instance of Paxos?
It cannot choose any new commands from clients until it has rebuilt the history of the chosen values selected by the last leader. The blog post talks about this as a "leader takeover phase" where after a crash of the old leader the new leader is trying to bring all nodes fully up to date.
In effect, whatever the last leader transmitted that reached a majority of nodes is chosen; the new leader cannot change this history. During the takeover phase it is simply synchronising nodes to get them all up to date. Only when the new leader has finished this phase, and is known to be fully up to date with all chosen values, can it process any new client commands (i.e. take on any new work).
If the answer to the above question is "Yes", then how do the acceptors
reset their state?
They don't. There is a gap between a value being chosen and any node learning that the value has been chosen. In the context of a database you cannot "commit" the value (apply it to the data store) until you have "learnt" the chosen value. Paxos ensures that a chosen value won't ever be undone. So don't commit the value until you learn that the value has been chosen. The blog post gives more details on this.
This is from my experience of implementing Raft and reading the ZAB paper, which are the two prevalent incarnations of Paxos. I haven't really gotten into simple Paxos or Multi-Paxos.
When a client sends a commit to any node in the cluster, that node redirects the commit to the leader. The leader then sends the commit message to each node in the quorum, and when all of those nodes confirm the commit, the leader commits it to its own log.
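A rough, Raft-style sketch of that flow is below. The names are illustrative only; a real implementation also tracks terms, performs log-matching checks, retries followers, and so on.

import java.util.List;

class LeaderSketch {
    interface Follower {
        boolean append(long index, String command);   // true = the follower wrote the entry
    }

    private final List<Follower> followers;
    private long commitIndex = 0;
    private long lastIndex = 0;

    LeaderSketch(List<Follower> followers) {
        this.followers = followers;
    }

    // Handles a client command forwarded to the leader. Returns true once committed.
    boolean handleClientCommand(String command) {
        long index = ++lastIndex;                      // append to the leader's own log first
        int acks = 1;                                  // the leader counts itself
        for (Follower f : followers) {
            if (f.append(index, command)) {
                acks++;
            }
        }
        // In Raft a majority is enough; the answer above waits for every node it contacted.
        if (acks > (followers.size() + 1) / 2) {
            commitIndex = index;                       // the entry is now committed and can be applied
            return true;
        }
        return false;
    }
}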
I am trying to implement transactions for distributed services in java over REST. I have some questions to ask.
What happens when resources reply affirmatively and in phase 2 they fail to commit?
I tried to search but unfortunately I could not find a proper answer to what happens when a rollback fails in the 2PC protocol. I know that it's a blocking protocol and it waits indefinitely for a response, but what happens in a real-world scenario?
What are the other protocols for distributed transaction management?
I read about JTA for transaction implementation, but is there any other implementation which can be used to implement transactions?
Any reply will be helpful. Thanks in advance.
I don't have answers to these questions, but I created a specific method for my specific case, so I'm posting it here in case someone needs transactions for the same kind of cases.
In my case there are no changes to existing entries in the database (or in the indexer, which also runs as a service); the system only adds new entries in different places, so false failures were not harmful but false successes were. So for my particular case I followed the following strategy:
i. All resources store a transaction ID with each row in the database. In the first phase, when the coordinator asks the resources, each resource makes its entries in the database with the transaction ID generated by the coordinator.
ii. After phase 1, when all resources have replied affirmatively (meaning the resources have made their changes to the database), the coordinator makes an entry in its own log that the transaction is successful and conveys this to the resources. Each resource then marks the transaction status as successful in the row of data it inserted.
iii. A service runs continuously, scanning the database and correcting the transaction status by asking the coordinator for the status. If the coordinator has no entry, or has a failure entry, the transaction is treated as failed and that status is updated by the service. When fetching data, if a row in the database carries a failure label, the reader always checks the transaction status with the coordinator; if the coordinator does not confirm success, the row is filtered out of the results. Hence data entries for which there is no information, or for which there is failure information, are never supplied, so the outcome is always consistent.
This strategy provides a form of atomicity for my case.
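As an illustration of step iii, a hedged sketch of the read-side filtering might look like this. The CoordinatorClient and Row types are hypothetical, invented for the example; they are not the original poster's code.

import java.util.ArrayList;
import java.util.List;

class ConsistentReader {
    interface CoordinatorClient {
        // true only if the coordinator logged this transaction as successful
        boolean isSuccessful(String transactionId);
    }

    static final class Row {
        final String data;
        final String transactionId;
        final boolean markedSuccessful;      // local status column written in step ii
        Row(String data, String transactionId, boolean markedSuccessful) {
            this.data = data;
            this.transactionId = transactionId;
            this.markedSuccessful = markedSuccessful;
        }
    }

    private final CoordinatorClient coordinator;

    ConsistentReader(CoordinatorClient coordinator) {
        this.coordinator = coordinator;
    }

    // Returns only rows whose transaction is known to have succeeded; rows with no
    // information or with failure information are dropped, as described above.
    List<Row> filter(List<Row> candidates) {
        List<Row> result = new ArrayList<>();
        for (Row row : candidates) {
            if (row.markedSuccessful || coordinator.isSuccessful(row.transactionId)) {
                result.add(row);
            }
        }
        return result;
    }
}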