Referencing this article about performing two-phase commits with MongoDB:
What is the purpose of the "initial" state of the transaction? Why wouldn't you just insert your transaction document with a "pending" state and save a round trip to the database?
If more than one application is running the same transaction, as explained later in the article, the "initial" state basically means that the logical lock is free, so findAndModify() can update the document to acquire that lock. Thus only one application is allowed to run the transaction.
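The logical-lock idea can be sketched without a live MongoDB. Here a Python dict stands in for the transaction document and claim_transaction plays the role of findAndModify filtering on state "initial"; all names are invented for illustration:

```python
import threading

# In-memory stand-in for the transaction document in the collection.
txn_doc = {"_id": "t1", "state": "initial"}
doc_lock = threading.Lock()  # models the atomicity of findAndModify

def claim_transaction(worker_id):
    """Atomically move the document from 'initial' to 'pending',
    recording which worker won -- only one claim can succeed."""
    with doc_lock:
        if txn_doc["state"] == "initial":
            txn_doc["state"] = "pending"
            txn_doc["owner"] = worker_id
            return True
        return False  # someone else already holds the logical lock

results = {}
threads = [
    threading.Thread(target=lambda w=w: results.update({w: claim_transaction(w)}))
    for w in ("app-1", "app-2", "app-3")
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Exactly one application acquired the logical lock.
print(sum(results.values()))  # 1
```

Because only the "initial" document matches the filter, concurrent claimers cannot both win, which is exactly the property the extra state buys you.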
There is actually a good reason for the initial state: the 2PC protocol defines a state machine with an 'initial' state that transitions to 'pending'. The transaction should only be in the 'pending' state once all participants are in the pending state. In addition, the transition to 'commit' should only be started when the transaction is in the 'pending' state. Finally, the transaction should only be marked 'committed' once all participants have successfully committed.
If you started the transition to commit without fully reaching the pending state, you risk breaking the atomicity and isolation guarantees.
In the referenced article, the transaction state is changed to pending before the participants are set to pending (i.e., before the records are inserted), and the transaction state is set to committed before the participants are committed. There is also an extra state called 'done'. This might lead to bugs in the coordinator implementation, so I suggest you take the article with a grain of salt.
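The ordering rule above can be sketched as a tiny coordinator state machine (a simplified illustration, not the article's schema): the transaction only advances once every participant has reached the corresponding state.

```python
# Simplified 2PC coordinator: the transaction state advances only
# AFTER all participants have reached the corresponding state.
class Participant:
    def __init__(self):
        self.state = "initial"

    def prepare(self):
        self.state = "pending"
        return True  # vote yes

    def commit(self):
        self.state = "committed"

def run_two_phase_commit(participants):
    txn_state = "initial"
    # Phase 1: drive every participant to pending first...
    if not all(p.prepare() for p in participants):
        return "aborted"
    txn_state = "pending"    # ...only then mark the transaction pending
    # Phase 2: same rule for commit.
    for p in participants:
        p.commit()
    txn_state = "committed"  # only after every participant has committed
    return txn_state

parts = [Participant() for _ in range(3)]
print(run_two_phase_commit(parts))  # committed
```

Reversing either ordering (marking the transaction pending or committed before the participants) is exactly the bug pattern the answer warns about.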
Related
I have a question about this paragraph from https://docs.oracle.com/cd/E19229-01/819-1644/detrans.html (Global and Local Transactions):
"Initially, all transactions are local. If a non-XA data source connection is the first resource connection enlisted in a transaction scope, it will become a global transaction when a (second) XA data source connection joins it. If a second non-XA data source connection attempts to join, an exception is thrown."
Can I have the first connection non-XA and the second XA? So does the first become XA without any exception thrown? (I'm in doubt.)
Can I have the first connection marked XA, the second marked XA, and the third non-XA? (I suppose not.)
What happens if the first EJB (trans-type=required) uses XA on its database and calls a remote EJB (trans-type=required, deployed in another app server) that uses a non-XA database? Could I end up with two distinct transactions, so that XA is not the right choice? And what happens if the two EJBs are in the same server but in two distinct EARs?
"In scenarios where there is only a single one-phase commit resource provider that participates in the transaction and where all the two-phase commit resource-providers that participate in the transaction are used in a read-only fashion. In this case, the two-phase commit resources all vote read-only during the prepare phase of two-phase commit. Because the one-phase commit resource provider is the only provider to complete any updates, the one-phase commit resource does not have to be prepared."
https://www.ibm.com/support/knowledgecenter/SSEQTP_8.5.5/com.ibm.websphere.base.doc/ae/cjta_trans.html
What does read-only mean here? So can we mix XA updates with read-only non-XA resources?
Some of these should really be split out into separate questions. I can answer the first couple of questions.
Can I have the first connection non-XA and the second XA?
Yes, if you are willing to use Last Participant Support.
So does the first become XA without any exception thrown?
No, the transaction manager cannot convert a non-XA-capable connection into one that is XA capable. A normal non-XA commit or rollback will be performed on the connection, but it still participates in the transaction alongside the XA resources. I'll discuss how this is done further down, in summarizing the Last Participant Support optimization.
Can I have the first connection marked XA, the second marked XA, and the third non-XA?
I assume you meant to say first connection marked XA, and so forth. Yes, you can do this by relying on Last Participant Support.
What does read-only mean?
Read-only refers to using the transactional resource in a way that does not modify any data. For example, you might run a query that locks a row in a database and reads data from it, but does not perform any updates on it.
So can we mix XA updates with read-only non-XA?
You have this in reverse. The document that you cited indicates that the XA resources can be read-only and the non-XA resource can make updates. This works because the XA resources have a spec-defined way of indicating to the transaction manager that they did not modify any data (by voting XA_RDONLY in their response to the xa.prepare request). Because they haven't written any data, they only need to release their locks, so the commit of the overall transaction reduces to a non-XA commit/rollback of the one-phase resource; after that, either resolution of the XA-capable resources (commit or rollback) has the same effect.
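The read-only optimization can be illustrated with a toy prepare phase. The XA_RDONLY vote mirrors the XA spec, but the classes and function here are invented for the sketch:

```python
XA_OK, XA_RDONLY = "XA_OK", "XA_RDONLY"

class XAResource:
    def __init__(self, name, modified_data):
        self.name = name
        self.modified = modified_data

    def prepare(self):
        # A resource that wrote nothing votes read-only: it just
        # releases its locks and drops out of phase 2 entirely.
        return XA_OK if self.modified else XA_RDONLY

def commit_with_one_phase_resource(xa_resources):
    votes = {r.name: r.prepare() for r in xa_resources}
    if all(v == XA_RDONLY for v in votes.values()):
        # No XA resource wrote anything, so the outcome reduces to a
        # plain one-phase commit of the single non-XA resource.
        return "one-phase commit of non-XA resource"
    return "full two-phase commit required"

readers = [XAResource("db1", False), XAResource("db2", False)]
print(commit_with_one_phase_resource(readers))
```

As soon as any XA resource votes XA_OK (i.e. it wrote something), the shortcut no longer applies and a real second phase is needed.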
Last Participant Support
Last Participant Support, mentioned earlier, is a feature of the application server that simulates the participation of a non-xa resource as part of a transaction alongside one or more xa-capable resources. There are some risks involved in relying on this optimization, namely a timing window where the transaction can be left in-doubt, requiring manual intervention to resolve it.
Here is how it works:
You operate on all of the enlisted resources (XA and non-XA) as you normally would, and when you are ready, you invoke the userTransaction.commit operation (or rely on container-managed transactions to issue the commit for you). When the transaction manager receives the request to commit, it sees that there is a non-XA resource involved and orders the prepare/commit operations to the back end in a special way.

First, it tells all of the XA-capable resources to do xa.prepare, and receives the vote from each of them. If all indicate that they have successfully prepared and would be able to commit, then the transaction manager proceeds to issue a commit to the non-XA resource.

If the commit of the non-XA resource succeeds, the transaction manager commits all of the XA-capable resources. Even if the system goes down at this point, it is written in the recovery log that these resources must commit; the transaction manager will later find them during a recovery attempt and commit them, with their corresponding records in the back end remaining locked until that happens. If the commit of the non-XA resource fails, the transaction manager instead rolls back all of the XA-capable resources.

The risk comes from the possibility that the request to commit the non-XA resource might not return at all, leaving the transaction manager no way of knowing whether that resource has committed or rolled back, and thus no way of knowing whether to commit or roll back the XA-capable resources. That leaves the transaction in-doubt and in need of manual intervention to properly recover. Only enable/rely upon Last Participant Support if you are okay with accepting this risk.
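The ordering just described can be sketched as follows. This is a simplified model with invented classes; a real transaction manager also writes recovery-log records between the steps:

```python
class InDoubtError(Exception):
    """Raised when the outcome of the non-XA commit is unknown."""

class FakeResource:
    """Toy resource for the sketch; real resources are XA/JDBC connections."""
    def __init__(self, prepare_ok=True, commit_ok=True):
        self.prepare_ok = prepare_ok
        self.commit_ok = commit_ok
        self.state = "active"

    def prepare(self):
        self.state = "prepared"
        return self.prepare_ok

    def commit(self):
        self.state = "committed"
        return self.commit_ok

    def rollback(self):
        self.state = "rolled back"

def last_participant_commit(xa_resources, one_phase_resource):
    # Step 1: xa.prepare every XA-capable resource and collect the votes.
    if not all(r.prepare() for r in xa_resources):
        for r in xa_resources:
            r.rollback()
        one_phase_resource.rollback()
        return "rolled back"
    # Step 2: commit the single non-XA resource. If this call never
    # returned, the transaction would be left in-doubt -- the risk
    # described above.
    try:
        ok = one_phase_resource.commit()
    except TimeoutError:
        raise InDoubtError("non-XA outcome unknown; manual resolution needed")
    # Step 3: the non-XA outcome decides the fate of the XA resources.
    for r in xa_resources:
        r.commit() if ok else r.rollback()
    return "committed" if ok else "rolled back"

xa = [FakeResource(), FakeResource()]
print(last_participant_commit(xa, FakeResource()))  # committed
```

The key design point is that the non-XA resource commits strictly between the XA prepare and XA commit steps, so its one-shot outcome effectively acts as the transaction's decision record.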
I'm trying to get a better understanding of the lock acquisition behavior on MongoDB transactions. I have a scenario where two concurrent transactions try to modify the same document. Since one transaction will get the write lock on the document first, the second transaction will run into a write conflict and fail.
I stumbled upon the maxTransactionLockRequestTimeoutMillis setting as documented here: https://docs.mongodb.com/manual/reference/parameters/#param.maxTransactionLockRequestTimeoutMillis and it states:
The maximum amount of time in milliseconds that multi-document transactions should wait to acquire locks required by the operations in the transaction.
However, changing this value does not seem to have an impact on the observed behavior with a write conflict. Transaction 2 does not seem to wait for the lock to be released but immediately runs into a write conflict when another transaction holds the lock (unlike concurrent writes outside a transaction, which will block and wait for the lock).
Do I understand correctly that the time configured in maxTransactionLockRequestTimeoutMillis does not cover actually acquiring the write lock on the document, or is there something wrong with my tests?
In 2PC, what happens if the coordinator asks 3 participants to commit and the second one fails without responding to the coordinator?
A client arrives and asks the second node for the value; the second node has just come up but did not manage to commit, so it returns an old value. Is that a fault of 2PC?
The missing part of 2PC: 2PR (two-phase read)
If any of the commit messages are lost, or don't take effect at some participants for some reason, the resource remains in the prepared state (which is uncertain), even after a restart, because the prepared state must be persisted in non-volatile storage before the coordinator can ever send a commit message.
Anyone who tries to read an uncertain resource must refer to the coordinator to determine the exact state of that resource. Once that is determined, you can choose the right version of the value.
For your case, the second node returns the new value (with the help of the coordinator to find out that the new value was really committed and the old value is stale).
---------- edit --------------
Some implementations use an exclusive lock during the prepare phase, which means that once prepared, nothing else can read or write the prepared resource. So, before the participant has committed, anyone trying to read must wait.
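The two-phase-read idea can be sketched like this (all names invented; the coordinator's durable decision log is modeled as a dict):

```python
# A participant that restarted after prepare holds two versions of a
# value and does not know which is correct until it asks the coordinator.
coordinator_log = {"txn-42": "committed"}  # durable decision record

participant = {
    "state": "prepared",   # uncertain: the commit message may have been lost
    "txn_id": "txn-42",
    "old_value": 100,
    "new_value": 250,
}

def read_value(p):
    if p["state"] != "prepared":
        # Certain states need no coordinator round trip.
        return p["old_value"] if p["state"] == "aborted" else p["new_value"]
    # Uncertain resource: defer to the coordinator's recorded decision.
    decision = coordinator_log.get(p["txn_id"], "aborted")
    return p["new_value"] if decision == "committed" else p["old_value"]

print(read_value(participant))  # 250
```

If the coordinator has no record of the transaction, the read falls back to the old value, which matches the "filter out uncertain data" behavior described above.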
If the coordinator is asking them to commit, then it means that all participants have already answered that they are prepared to commit. Prepared means that the participant is guaranteed to be able to commit; there is no failure. If the node vanished in a meteor strike, then the node is restored from the HA/DR data, and the restored node resumes the transaction and proceeds with the commit.
Participants in 2PC are durable, persisted coordinators capable of backup and restore. In theory, when one of the participants cannot be restored, every participant, and the coordinators, are all restored back in time to before the last coordinated transaction. In practice, all coordinators support handling the case when a participant is lost: the transaction is manually forced into one state or the other; see Resolve Transactions Manually or Resolving indoubt transactions manually.
All the samples usually demonstrate some sort of change to reliable collections with CommitAsync(), or a rollback in case of a failure. My code is using TryRemoveAsync(), so failure is not a concern (it will be retried later).
Is there a significant downside to invoking tx.CommitAsync() when no changes to reliable collections were performed?
Whenever you open a transaction and execute commands against a collection, these commands acquire locks in the TStore (collection) and are recorded in the transaction's temporary dictionary (change tracking) and also in the transaction logs; the replicator then forwards these changes to the replicas.
Once you execute tx.CommitAsync(), the temporary records are saved to disk, the transaction is registered in the logs and then replicated to the secondary replicas to also commit and save to disk, and then the locks are released.
If the collection is not modified, the transaction won't have anything to save/replicate and will just close the transaction.
If you don't call tx.CommitAsync() after the operation, the transaction is aborted, any pending operations (if any) are discarded, and the abort operation is written to the logs to notify the other replicas.
In both cases, commit and abort will generate logs (and replicate them). The only detail I am not sure about is whether these logs are also generated when no changes are in place; I assume they are. Regarding performance: the act of reading or attempting to change a collection will acquire locks that need to be released with a commit or abort. I think these are the biggest impact on your code, because they will prevent other threads from modifying the collection until you complete the transaction. In this case I wouldn't be too worried about committing an empty transaction.
// Create a new Transaction object for this partition
using (ITransaction tx = base.StateManager.CreateTransaction())
{
    // Modify the collection
    await m_dic.AddAsync(tx, key, value, cancellationToken);

    // CommitAsync sends the Commit record to the log & secondary replicas
    // After a quorum responds, all locks are released
    await tx.CommitAsync();
} // If CommitAsync is not called, this line disposes the transaction and discards the changes
You can find most of these details in this documentation.
If you really want to go deep into the implementation details to answer this question, I suggest you dig into the source code for the replicator here.
I am trying to implement transactions for distributed services in java over REST. I have some questions to ask.
What happens when resources reply affirmatively and then, in phase 2, fail to commit?
I tried to search, but unfortunately I could not find a proper answer to what happens when a rollback fails in the 2PC protocol. I know that it's a blocking protocol and it waits an unbounded time for a response, but what happens in a real-world scenario?
What are the other protocols for distributed transaction management?
I read about JTA for the transaction implementation, but is there any other implementation that can be used to implement transactions?
Any reply will be helpful. Thanks in advance.
I don't have answers to these questions, but I created a specific method for my specific case. So I am posting it here in case someone needs transactions for the same kind of case.
Since in my case there were no changes to existing entries in the database (or the indexer, which also runs as a service), only new entries in different places, false failures were not harmful but false successes were. So for my particular case I followed this strategy:
i. All resources add a transaction id to each row in the database. In the first phase, when the coordinator asks the resources, all resources make entries in the database tagged with the transaction id generated by the coordinator.
ii. After phase 1, when all resources reply affirmatively (meaning the resources have made their changes to the database), the coordinator makes an entry in its own log that the transaction is successful and conveys the same to the resources. All resources then mark the transaction status as successful on the rows of data they inserted.
iii. A service runs continuously, searching the database and correcting transaction statuses by asking the coordinator for the status. If there is no entry, or a failure entry, the transaction is treated as failed, and the service updates the row accordingly. When fetching data, if a database entry carries a failure label, the reader always checks the transaction status with the coordinator and filters the entry out unless the coordinator recorded a success. Hence data entries for which there is no information, or failure information, are never returned, so the outcome is always consistent.
This strategy provides atomicity for my case.
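The steps above can be sketched as a read-side filter (all names invented for illustration; the coordinator's log is modeled as a dict):

```python
# Coordinator's durable record of transaction outcomes.
coordinator = {"t1": "success"}   # t2 never completed: no entry at all

# Rows written by resources, each tagged with a transaction id/status
# as described in steps i and ii.
rows = [
    {"data": "order-A", "txn_id": "t1", "txn_status": "success"},
    {"data": "order-B", "txn_id": "t2", "txn_status": "failure"},
    # A "false failure": the row was marked failed, but the coordinator
    # actually recorded success (the ack to the resource was lost).
    {"data": "order-C", "txn_id": "t1", "txn_status": "failure"},
]

def visible_rows(rows):
    """Step iii: return only rows whose transaction is known to have succeeded."""
    out = []
    for r in rows:
        if r["txn_status"] == "success":
            out.append(r)
        elif coordinator.get(r["txn_id"]) == "success":
            out.append(r)  # doubtful row rescued by the coordinator's log
    return out

print([r["data"] for r in visible_rows(rows)])  # ['order-A', 'order-C']
```

Because rows without a coordinator success record are always filtered out, false failures are recoverable while false successes can never be observed, which is the stated goal of the strategy.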