Downsides of CommitAsync() w/o any changes to collection - azure-service-fabric

All the samples usually demonstrate some sort of change to reliable collections with CommitAsync() or rollback in case of a failure. My code is using TryRemoveAsync(), so failure is not a concern (will be retried later).
Is there a significant downside to invoking tx.CommitAsync() when no changes to reliable collections were performed?

Whenever you open a transaction and execute commands against a collection, those commands acquire locks in the TStore (the collection), are recorded in the transaction's temporary dictionary (change tracking) and in the transaction logs, and the replicator then forwards these changes to the replicas.
Once you call tx.CommitAsync(), the temporary records are saved to disk, the transaction is registered in the logs and replicated to the secondary replicas so that they also commit and save to disk, and then the locks are released.
If the collection is not modified, the transaction won't have anything to save or replicate and will simply close.
If you don't call tx.CommitAsync() after the operation, the transaction is aborted, any pending operations (if any) are discarded, and the abort is written to the logs to notify the other replicas.
In both cases, commit and abort generate log records (and replicate them). The only detail I am not sure about is whether these records are also generated when no changes are in place; I assume they are. Regarding performance, the act of reading or attempting to change a collection acquires locks that must be released with a commit or abort. I think these locks are the biggest impact on your code, because they prevent other threads from modifying the collection until you complete the transaction. In this case I wouldn't be too worried about committing an empty transaction.
// Create a new Transaction object for this partition
using (ITransaction tx = base.StateManager.CreateTransaction()) {
    // Modify the collection
    await m_dic.AddAsync(tx, key, value, cancellationToken);

    // CommitAsync sends the Commit record to the log & secondary replicas
    // After a quorum responds, all locks are released
    await tx.CommitAsync();
} // If CommitAsync is not called, this line disposes the transaction and discards the changes
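Applied to the question's scenario, a minimal sketch (reusing the m_dic dictionary from the sample above; the key is hypothetical) would simply commit regardless of whether TryRemoveAsync found anything, since committing an unchanged transaction just closes it:

// Remove the key if present, then commit either way
using (ITransaction tx = base.StateManager.CreateTransaction()) {
    // TryRemoveAsync takes a write lock on the key whether or not it currently exists
    var removed = await m_dic.TryRemoveAsync(tx, "some-key");

    // If nothing was removed, CommitAsync has nothing to replicate and just closes the
    // transaction and releases the locks; otherwise the removal is replicated and persisted
    await tx.CommitAsync();

    if (!removed.HasValue) {
        // Nothing to remove this time; the caller can retry later, as in the question
    }
}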
You can find most of these details in this documentation.
If you really want to go deep into the implementation details to answer this question, I suggest you dig for the answer in the source code for the replicator here.

Related

Are the changes of a write transaction in ClientA immediately visible to a ClientB read, started after COMMIT?

We are observing some behaviours/errors in some of our workflows, related to the consistency and visibility of a Postgres write transaction, followed by a read. One of our developers offered an explanation, but I could not find any search results documenting the proposed reasoning.
Given a single Postgres 10.3 host, the following operations take place:
ClientA performs a successful write transaction
After the COMMIT, an external notification is emitted
ClientB reacts to external notification and performs a read, only to find that the UPDATE transaction changes are not visible
The explanation that was proposed is that two postgres client connections on different threads don't have a guaranteed view snapshot and may not immediately observe the write transaction update after the commit. But from what I have read, I would expect that after the COMMIT has succeeded, a read operation then starting in response should see the effects of that write.
My specific question is: Given two database client connections on different threads, is it possible for there to be a race condition in which one client fails to see the effects of a write transaction AFTER the other client has committed it? (no overlapping transactions).
Every bit of documentation I have found thus far only refers to concerns about overlapping/concurrent transactions and the MVCC/transaction isolation topics. Nothing about a synchronised serial operation between two different client connections.
Edit: Some extra details about the configuration.
ClientA and ClientB would be different threads accessing postgres through a connection pool. Clients may both be in the same connection pool on the same application server, or it may be ClientA/ApplicationA and ClientB/ApplicationB.
When ClientB reacts, it will access the existing Application server connection pool to make a new read.
No, that cannot happen, unless the reading transaction started earlier and is running at the REPEATABLE READ or SERIALIZABLE isolation level.
There is also the possibility that the reading transaction does not connect to the same server as the writing transaction, but to a streaming replication standby server with hot_standby enabled. Then this can easily happen, even with synchronous replication (unless you set synchronous_commit = remote_apply).
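To make that caveat concrete, here is a small sketch using the Npgsql .NET driver (the connection string, table and key are made up for illustration): the only way a reader on the same server misses the committed write is if its own transaction was opened earlier at REPEATABLE READ (or SERIALIZABLE) and had already taken its snapshot; a fresh READ COMMITTED read started after the COMMIT, as in the question's scenario, sees the change.

using System.Data;
using Npgsql;

const string connString = "Host=dbhost;Database=app;Username=app";

using var conn = new NpgsqlConnection(connString);
conn.Open();

// ClientB's transaction was opened *before* ClientA committed
using var tx = conn.BeginTransaction(IsolationLevel.RepeatableRead);

// First query in the transaction: this is where the REPEATABLE READ snapshot is taken
new NpgsqlCommand("SELECT 1", conn, tx).ExecuteScalar();

// ... ClientA commits its UPDATE on another connection at this point ...

// Second query in the same transaction: it still reads from the old snapshot,
// so ClientA's committed change is not visible here
var stale = new NpgsqlCommand("SELECT status FROM orders WHERE id = 42", conn, tx).ExecuteScalar();

tx.Commit();

// A brand-new read (default READ COMMITTED) started after ClientA's COMMIT takes a
// fresh snapshot and does see the change
var fresh = new NpgsqlCommand("SELECT status FROM orders WHERE id = 42", conn).ExecuteScalar();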

maxTransactionLockRequestTimeoutMillis with concurrent transactions

I'm trying to get a better understanding of the lock acquisition behavior on MongoDB transactions. I have a scenario where two concurrent transactions try to modify the same document. Since one transaction will get the write lock on the document first, the second transaction will run into a write conflict and fail.
I stumbled upon the maxTransactionLockRequestTimeoutMillis setting as documented here: https://docs.mongodb.com/manual/reference/parameters/#param.maxTransactionLockRequestTimeoutMillis and it states:
The maximum amount of time in milliseconds that multi-document transactions should wait to acquire locks required by the operations in the transaction.
However, changing this value does not seem to have an impact on the observed behavior with a write conflict. Transaction 2 does not seem to wait for the lock to be released again but immediately runs into a write conflict when another transaction holds the lock (unlike concurrent writes outside a transaction, which block and wait for the lock).
Do I understand correctly that the configured time in maxTransactionLockRequestTimeoutMillis does not include the act of actually receiving the write lock on the document, or is there something wrong with my tests?
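For reference, a sketch with the MongoDB .NET driver (connection string, database, collection and document are made up; a replica set is assumed, since transactions require one) that reproduces the scenario described above: the second transactional update on the same document fails immediately with a write conflict rather than waiting for maxTransactionLockRequestTimeoutMillis.

using MongoDB.Bson;
using MongoDB.Driver;

var client = new MongoClient("mongodb://localhost:27017");
var coll = client.GetDatabase("test").GetCollection<BsonDocument>("docs");

using var session1 = client.StartSession();
using var session2 = client.StartSession();

session1.StartTransaction();
session2.StartTransaction();

var filter = Builders<BsonDocument>.Filter.Eq("_id", 1);

// Transaction 1 writes the document first
coll.UpdateOne(session1, filter, Builders<BsonDocument>.Update.Set("n", 1));

try
{
    // Transaction 2 touches the same document; this is where the write conflict the
    // question describes surfaces immediately instead of waiting for the timeout
    coll.UpdateOne(session2, filter, Builders<BsonDocument>.Update.Set("n", 2));
    session2.CommitTransaction();
}
catch (MongoException)
{
    session2.AbortTransaction();
}

session1.CommitTransaction();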

In 2PC what happens in case of failure to commit?

In 2PC, what happens if the coordinator asks 3 participants to commit and the second one fails, giving no response to the coordinator?
A client arrives and asks the second node for the value; the second node has just come up but did not manage to commit, so it returns an old value... Is that a fault of 2PC?
The missing part of 2PC - 2PR (Two-Phase Read)
If any of the commit messages are lost or don't take effect at some participant for whatever reason, the resource remains in the prepared state (which is uncertain), even after a restart, because the prepared state must be persisted to non-volatile storage before the coordinator can ever send a commit message.
Anyone who tries to read an uncertain resource must refer to the coordinator to determine the exact state of that resource. Once that is determined, you can choose the right version of the value.
For your case, the second node returns the new value (with the help of the coordinator it finds out that the new value is really committed and the old value is stale).
---------- edit --------------
Some implementations use an exclusive lock during the prepare phase, which means that once prepared, nothing else can read or write the prepared resource. So, before the participant has committed, anyone who tries to read must wait.
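As an illustration of the read path described above, a minimal sketch (all types and methods here are hypothetical, not from any library): a read of a resource that is still in the prepared, uncertain state has to ask the coordinator for the transaction's outcome before choosing which version of the value to return.

static class TwoPhaseRead
{
    enum TxState { Committed, Aborted }

    interface ICoordinator
    {
        TxState GetOutcome(string txId);   // the coordinator's durable decision
    }

    sealed class Resource
    {
        public string CommittedValue = "old";
        public string? PreparedValue;      // value written during the prepare phase
        public string? PreparingTxId;      // non-null while the resource is uncertain
    }

    static string Read(Resource r, ICoordinator coordinator)
    {
        if (r.PreparingTxId == null)
            return r.CommittedValue;       // not uncertain: normal local read

        // Uncertain: the commit message may have been lost, so ask the coordinator
        // which way the transaction actually went before choosing a version
        return coordinator.GetOutcome(r.PreparingTxId) == TxState.Committed
            ? r.PreparedValue!             // committed: return the new value
            : r.CommittedValue;            // aborted: the old value is still current
    }
}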
If the coordinator is asking them to commit, then it means that all participants have already answered that they are prepared to commit. Prepared means that the participant is guaranteed to be able to commit. There is no failure. If the node vanished in a meteor strike, then the node is restored from the HA/DR data and the restored node resumes the transaction and proceeds with the commit.
Participants in 2PC are durable, persisted coordinators capable of backup and restore. In theory, in the case when one of the participants cannot be restored, every participant and the coordinator are all restored back in time to before the last coordinated transaction. In practice, all coordinators support enforcing the case when a participant is lost, and the transaction can be manually forced into one state or the other; see Resolve Transactions Manually or Resolving indoubt transactions manually.

What happens when rollback fails in two Phase Commit protocol

I am trying to implement transactions for distributed services in java over REST. I have some questions to ask.
What happens when resources reply affirmatively and in phase 2 they fail to commit?
I tried to search but unfortunately I could not find a proper answer to what happens when rollback fails in the 2PC protocol. I know that it's a blocking protocol and it waits for a response for an infinite time, but what happens in a real-world scenario?
What are the other protocols for distributed transaction management?
I read about JTA for transaction implementation, but is there any other implementation that can be used to implement transactions?
Any reply will be helpful. Thanks in advance.
I don't have answers to these questions, but I created a specific method for my specific case. I'm posting it here in case someone needs transactions for similar cases.
Since in my case there were no changes to existing entries in the database (or the indexer, which also runs as a service) but only new entries added in different places in the system, false failures were not harmful but false successes were. So for my particular case I followed the following strategy:
i. All the resources add a transaction id to the row in the database. In the first phase, when the coordinator asks the resources, all resources make entries in the database tagged with the transaction id generated by the coordinator.
ii. After phase 1, when all resources reply affirmatively (meaning the resources have made their changes to the database), the coordinator makes an entry in its own log that the transaction is successful and conveys the same to the resources. All resources then mark the transaction status as successful on the rows of data they inserted.
iii. A service runs continuously, searching the database and correcting the transaction status by asking the coordinator for it. If the coordinator has no entry, or a failure entry, the transaction is given a failure status, and the service updates the row accordingly. When fetching data, if a row in the database carries a failure label, the reader always checks the transaction status with the coordinator and filters the results accordingly. Hence data entries for which there is no information, or failure information, are never supplied, so the outcome is always consistent.
This strategy provides a form of atomicity for my case.
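As a rough sketch of the read-side filtering described in step iii (all names here are hypothetical): a row is only returned if it is already marked successful, or if the coordinator confirms its transaction succeeded; rows whose transaction has no entry or a failure entry are dropped.

using System.Collections.Generic;
using System.Linq;

// A row as stored by the resources: data plus the coordinator-issued transaction id and status
sealed record Row(string Data, string TxId, bool MarkedSuccessful);

interface ICoordinatorLog
{
    // True only if the coordinator logged a success entry for this transaction id
    bool IsSuccessful(string txId);
}

static class ConsistentReads
{
    // Rows with no coordinator entry or a failure entry are filtered out of every read
    public static IEnumerable<Row> VisibleRows(IEnumerable<Row> rows, ICoordinatorLog coordinator) =>
        rows.Where(r => r.MarkedSuccessful || coordinator.IsSuccessful(r.TxId));
}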

How does TransactionScope guarantee data integrity across multiple databases?

Can someone tell me the principle of how TransactionScope guarantees data integrity across multiple databases? I imagine it first sends the commands to the databases and then waits for the databases to respond before sending them a message to apply the command sent earlier. However, when execution is stopped abruptly while sending those apply messages, we could still end up with a database that has applied the command and one that has not. Can anyone shed some light on this?
Edit:
I guess what I'm asking is: can I rely on TransactionScope to guarantee data integrity when writing to multiple databases, in case of a power outage or a sudden shutdown?
Thanks, Bas
Example:
using (var scope = new TransactionScope())
{
    using (var context = new FirstEntities())
    {
        context.AddToSomethingSet(new Something());
        context.SaveChanges();
    }

    using (var context = new SecondEntities())
    {
        context.AddToSomethingElseSet(new SomethingElse());
        context.SaveChanges();
    }

    scope.Complete();
}
It promotes the transaction to the Distributed Transaction Coordinator (MSDTC) if it detects multiple databases, each of which then takes part in a two-phase commit. Each scope votes to commit, and hence we get the ACID properties, but distributed across databases. It can also be integrated with TxF and TxR. You should be able to use it the way you describe.
The two databases are kept consistent because the distributed COM+ transaction running under MSDTC has database transactions attached to it.
When one database votes to commit (e.g. when TransactionScope.Complete() is called), it tells the DTC that it votes to commit. When all databases have done this, they each have a change list. As long as the database transactions don't deadlock or conflict with other transactions at this point (e.g. by means of a fairness algorithm that pre-empts one transaction), all operations for each database are in the transaction log. If the system loses power after the commit has finished for one database but not yet for another, it has already been recorded in the transaction log that all resources voted to commit, so there is no logical reason for the commit to fail. Hence, the next time the database that wasn't able to commit boots up, it will finish those transactions left in this indeterminate state.
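If you want to observe the promotion, System.Transactions exposes a distributed transaction identifier that stays Guid.Empty while the transaction is still a lightweight local one and becomes non-empty once it has been escalated to MSDTC. A small sketch (the SaveChanges calls against the two contexts are elided, as in the question's example):

using System;
using System.Transactions;

using (var scope = new TransactionScope())
{
    // ... SaveChanges() against the first and second database, as in the question ...

    // Guid.Empty: still a local transaction; non-empty: escalated to MSDTC
    Guid distributedId = Transaction.Current.TransactionInformation.DistributedIdentifier;
    Console.WriteLine(distributedId == Guid.Empty
        ? "still a local transaction"
        : $"escalated to MSDTC: {distributedId}");

    scope.Complete();   // the scope votes to commit; MSDTC then drives prepare/commit
}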
With distributed transactions it can in fact happen that the databases become inconsistent. You said:
At some point both databases have to be told to apply their changes. Lets say there is a power outage after telling the first db to apply, then the databases are out of sync. Or am I missing something?
You aren't. I think this is known as the Two Generals' Problem. It provably cannot be prevented. The window of failure is, however, quite small.