Does it matter for a SERIALIZABLE transaction if any other session uses e.g. autocommit or the READ COMMITTED isolation level?
In other words, is there any danger in mixing isolation levels (and autocommit) when accessing a database from multiple processes/threads (or anything else to watch out for)?
Note that I'm aware of the "ordinary" issues, like SERIALIZABLE transactions asking for a retry, etc. I'm asking about anything non-obvious that can happen when mixing different isolation levels.
EDIT:
From http://www.postgresql.org/docs/9.4/static/transaction-iso.html:
Consistent use of Serializable transactions can simplify development. The guarantee that any set of concurrent serializable transactions will have the same effect as if they were run one at a time means that if you can demonstrate that a single transaction, as written, will do the right thing when run by itself, you can have confidence that it will do the right thing in any mix of serializable transactions, even without any information about what those other transactions might do.
That could indicate that mixing isolation levels is not a good idea. On the other hand, it merely says that consistent use of the SERIALIZABLE level is good, not that mixing isolation levels is bad.
The Postgres wiki (https://wiki.postgresql.org/wiki/Serializable#PostgreSQL_Implementation) states this:
Any transaction which is run at a transaction isolation level other than SERIALIZABLE will not be affected by SSI. If you want to enforce business rules through SSI, all transactions should be run at the SERIALIZABLE transaction isolation level, and that should probably be set as the default.
So, SERIALIZABLE guarantees won't hold when mixing isolation levels.
SERIALIZABLE
All statements of the current transaction can only see rows committed before the first query or data-modification statement was executed in this transaction. If a pattern of reads and writes among concurrent serializable transactions would create a situation which could not have occurred for any serial (one-at-a-time) execution of those transactions, one of them will be rolled back with a serialization_failure error.
That means a SERIALIZABLE transaction only acts differently when running against other SERIALIZABLE transactions. If it runs against non-SERIALIZABLE transactions, it should behave as if it were a REPEATABLE READ transaction. That suggests it's completely safe to mix these transaction isolation levels.
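The practical consequence of the wiki quote above is that the SSI guarantee only covers transactions that opt in, so every session enforcing a business rule through SSI has to request SERIALIZABLE itself (or get it from default_transaction_isolation). A minimal sketch of opting in from application code; C# with Npgsql is an assumption on my part, and the connection string and accounts table are placeholders:

    using System.Data;
    using Npgsql;

    class SerializableEverywhere
    {
        // Hypothetical connection string and table, for illustration only.
        const string ConnString = "Host=localhost;Database=app;Username=app;Password=secret";

        static void Main()
        {
            using var conn = new NpgsqlConnection(ConnString);
            conn.Open();

            // SSI only protects transactions that actually run at SERIALIZABLE,
            // so every session touching the protected data must request it
            // (or rely on default_transaction_isolation = 'serializable').
            using var tx = conn.BeginTransaction(IsolationLevel.Serializable);

            using var cmd = new NpgsqlCommand(
                "UPDATE accounts SET balance = balance - 100 WHERE id = 1", conn, tx);
            cmd.ExecuteNonQuery();

            tx.Commit(); // may still fail with SQLSTATE 40001 and need a retry
        }
    }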
Related
I'm trying to get a better understanding of the lock acquisition behavior of MongoDB transactions. I have a scenario where two concurrent transactions try to modify the same document. Since one transaction will get the write lock on the document first, the second transaction will run into a write conflict and fail.
I stumbled upon the maxTransactionLockRequestTimeoutMillis setting as documented here: https://docs.mongodb.com/manual/reference/parameters/#param.maxTransactionLockRequestTimeoutMillis and it states:
The maximum amount of time in milliseconds that multi-document transactions should wait to acquire locks required by the operations in the transaction.
However, changing this value does not seem to have any impact on the observed behavior of a write conflict. Transaction 2 does not seem to wait for the lock to be released, but immediately runs into a write conflict when another transaction holds the lock (unlike concurrent writes outside a transaction, which do block and wait for the lock).
Do I understand correctly that the time configured in maxTransactionLockRequestTimeoutMillis does not cover actually acquiring the write lock on the document, or is there something wrong with my tests?
In PostgreSQL (and other MVCC databases), transactions can roll back due to a deadlock or serialization error. Assuming two transactions are currently running, is it ever possible that both of them, rather than just one, fail with this kind of error?
The reason why I am asking is that I am writing a retry implementation. If both transactions can fail, we might end up in a never-ending loop of retries if both retry immediately. If only one transaction can fail, I don't see any harm in retrying as soon as possible.
Yes. A deadlock can involve more than two transactions, in which case more than one may be terminated. But that is an extremely rare condition, normally.
If just two transactions deadlock, one survives. The manual:
PostgreSQL automatically detects deadlock situations and resolves them by aborting one of the transactions involved, allowing the other(s) to complete.
Serialization failures only happen at the REPEATABLE READ or SERIALIZABLE isolation levels. I don't know of any particular limit on how many serialization failures can happen concurrently, but I have also never heard of any need to delay retrying.
I would retry as soon as possible either way.
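For what it's worth, the retry wrapper does not need to be elaborate. A sketch of the shape I have in mind; C# with Npgsql is an assumption (the question doesn't name a client), SQLSTATE 40001 is serialization_failure and 40P01 is deadlock_detected:

    using System;
    using System.Data;
    using Npgsql;

    static class Retry
    {
        // Runs the given unit of work in a SERIALIZABLE transaction and retries
        // immediately on serialization failure (40001) or deadlock (40P01).
        public static void InSerializableTransaction(
            string connString,
            Action<NpgsqlConnection, NpgsqlTransaction> work,
            int maxAttempts = 10)
        {
            for (var attempt = 1; ; attempt++)
            {
                using var conn = new NpgsqlConnection(connString);
                conn.Open();
                using var tx = conn.BeginTransaction(IsolationLevel.Serializable);
                try
                {
                    work(conn, tx);
                    tx.Commit(); // serialization failures can surface here, too
                    return;
                }
                catch (PostgresException e)
                    when ((e.SqlState == "40001" || e.SqlState == "40P01")
                          && attempt < maxAttempts)
                {
                    // The conflicting transaction has already won, so at least
                    // one party made progress; retrying immediately is fine.
                }
            }
        }
    }

Callers pass the actual queries as the work delegate, e.g. Retry.InSerializableTransaction(connString, (conn, tx) => { /* commands here */ });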
If I execute some SQL inside a transaction successfully, can it happen that the commit fails? And what are possible causes? Can a failure be related to the executed queries, or only to DB-side issues?
The question comes up because I need to judge whether it makes sense to commit transactions inside tests or whether it is "safe enough" to just roll back after each test case.
If I execute some SQL inside a transaction successfully, can it happen that the commit fails?
Yes.
And what are possible causes?
DEFERRABLE constraints with SET CONSTRAINTS DEFERRED or in a one-statement autocommit transaction. (Can't happen unless you use DEFERRABLE constraints; see the sketch after this list.)
SERIALIZABLE transaction with serialization failure detected at commit time. (Can't happen unless you use SERIALIZABLE transactions)
Asynchronous commit where the DB crashes or is shut down. (Can't happen if synchronous_commit = on, the default)
Disk I/O error, filesystem error, etc.
Out-of-memory error
Network error leading to session disconnect after you send the commit but before you get confirmation of success. In this case you don't know for sure if it committed or not.
... probably more
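To make the first item concrete, here is a sketch where the violation only surfaces at commit time. C# with Npgsql and the parent/child tables are assumptions for the example, and the constraint is declared DEFERRABLE INITIALLY DEFERRED rather than deferred via SET CONSTRAINTS; the effect is the same:

    using System.Data;
    using Npgsql;

    class CommitCanFail
    {
        static void Main()
        {
            // Placeholder connection string.
            using var conn = new NpgsqlConnection(
                "Host=localhost;Database=app;Username=app;Password=secret");
            conn.Open();

            new NpgsqlCommand(@"
                CREATE TABLE parent (id int PRIMARY KEY);
                CREATE TABLE child (
                    parent_id int REFERENCES parent (id)
                        DEFERRABLE INITIALLY DEFERRED
                );", conn).ExecuteNonQuery();

            using var tx = conn.BeginTransaction();

            // Succeeds: the foreign key check is deferred until commit.
            new NpgsqlCommand("INSERT INTO child VALUES (42)", conn, tx)
                .ExecuteNonQuery();

            // Throws PostgresException 23503 (foreign_key_violation) here,
            // not at ExecuteNonQuery() above.
            tx.Commit();
        }
    }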
Can a failure be related to the executed queries, or only to DB-side issues?
Either. A serialization failure, for example, is definitely related to the queries run.
If you're using READ COMMITTED isolation with no deferred constraints then commits are only likely to fail due to underlying system errors.
The question comes up because I need to judge whether it makes sense to commit transactions inside tests or whether it is "safe enough" to just roll back after each test case.
Any sensible test suite has to cover multiple concurrent transactions interacting, committing in different orders, etc.
If all you test is single standalone transactions you're not testing the real system.
So the question is IMO moot, because a decent suite of tests has to commit anyway.
Anyone know of a message bus implementation which offers granular control over consistency guarantees? Full ACID is too slow and no ACID is too wrong.
We're currently using Rhino ESB wrapping MSMQ for our messaging. When using durable, transactional messaging with distributed transactions, MSMQ can block the commit for considerable time while it waits on I/O completion.
Our messages fall into two general categories: business logic and denormalisation. The latter account for a significant percentage of message bus traffic.
Business logic messages require the guarantees of full ACID and MSMQ has proven quite adequate for this.
Denormalisation messages:
1. MUST be durable.
2. MUST NOT be processed until after the originating transaction completes.
3. MAY be processed multiple times.
4. MAY be processed even if the originating transaction rolls back, as long as (2) is adhered to.
(In some specific cases the durability requirements could probably be relaxed, but identifying and handling those cases as exceptions to the rule adds complexity.)
All denormalisation messages are handled in-process so there is no need for IPC.
If the process is restarted, all transactions may be assumed to have completed (committed or rolled back) and all denormalisation messages not yet processed must be recovered. It is acceptable to replay denormalisation messages which were already processed.
As far as I can tell, messaging systems which deal with transactions tend to offer a choice between full ACID or nothing, and ACID carries a performance penalty. We're seeing calls to TransactionScope#Commit() taking as long as a few hundred milliseconds in some cases depending on the number of messages sent.
Using a non-transactional message queue causes messages to be processed before their originating transaction completes, resulting in consistency problems.
Another part of our system which has similar consistency requirements but lower complexity is already using a custom implementation of something akin to a transaction log, and generalising that for this use case is certainly an option, but I'd rather not implement a low-latency, concurrent, durable, transactional messaging system myself if I don't have to :P
In case anyone's wondering, the reason for requiring durability of denormalisation messages is that detecting desyncs can be extremely difficult and fixing them extremely expensive. People do notice when something's slightly wrong and a page refresh doesn't fix it, so ignoring desyncs isn't an option.
It's not exactly the answer you're looking for, but Jonathan Oliver has written extensively on how to avoid using distributed transactions in messaging and yet maintain transactional integrity:
http://blog.jonathanoliver.com/2011/04/how-i-avoid-two-phase-commit/
http://blog.jonathanoliver.com/2011/03/removing-2pc-two-phase-commit/
http://blog.jonathanoliver.com/2010/04/idempotency-patterns/
Not sure if this helps you but, hey.
It turns out that MSMQ+SQL+DTC don't even offer the consistency guarantees we need. We previously encountered a problem where messages were being processed before the distributed transaction which queued them had been committed to the database, resulting in out-of-date reads. This is a side-effect of using ReadCommitted isolation to consume the queue, since:
1. Start transaction A.
2. Update database table in A.
3. Queue message in A.
4. Request commit of A.
5. Message queue commits A.
6. Start transaction B.
7. Read message in B.
8. Read database table in B, using ReadCommitted (gets pre-A data).
9. Database commits A.
Our requirement is that B's read of the table blocks until A's commit, which requires Serializable transactions and carries a performance penalty.
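If one does go down the Serializable route, opting the consuming side in is straightforward with System.Transactions (a sketch; the handler body is elided), and it is exactly where the performance penalty shows up:

    using System;
    using System.Transactions;

    var options = new TransactionOptions
    {
        // Make B's read of the table wait for A's commit
        // instead of seeing pre-A data.
        IsolationLevel = IsolationLevel.Serializable,
        Timeout = TimeSpan.FromSeconds(30)
    };

    using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
    {
        // ... dequeue the message and read the related table here ...
        scope.Complete();
    }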
It looks like the normal thing to do is indeed to implement the necessary constraints and guarantees oneself, even though it sounds like reinventing the wheel.
Anyone got any comments on this?
If you want to do this by hand, here is a reliable approach. It satisfies (1) and (2), and it doesn't even need the liberties that you allow in (3) and (4).
1. Producer (business logic) starts transaction A.
2. Insert/update whatever into one or more tables.
3. Insert a corresponding message into PrivateMessageTable (part of the domain, and unshared, if you will). This is what will be distributed.
4. Commit transaction A. The producer has now simply and reliably performed its writes, including the insertion of a message, or rolled everything back.
5. A dedicated distributor job queries a batch of unprocessed messages from PrivateMessageTable.
6. Distributor starts transaction B.
7. Mark the unprocessed messages as processed, rolling back if the number of rows modified is different from what was expected (two instances running at the same time?).
8. Insert a public representation of the messages into PublicMessageTable (a publicly exposed table, in whatever way). Assign new, strictly sequential Ids to the public representations. Because only one process is doing these inserts, this can be guaranteed. Note that the table must be on the same host to avoid 2PC.
9. Commit transaction B. The distributor has now distributed each message to the public table exactly once, with strictly sequential Ids.
10. A consumer (there can be several) queries the next batch of messages from PublicMessageTable with Id greater than its own LastSeenId.
11. Consumer starts transaction C.
12. Consumer inserts its own representation of the messages into its own table, ConsumerMessageTable (thus advancing LastSeenId). Insert-ignore can help protect against multiple instances running. Note that this table can be on a completely different server.
13. Commit transaction C. The consumer has now consumed each message exactly once, in the same order the messages were made publicly available, without ever skipping a message.
14. We can do whatever we want based on the consumed messages.
Of course, this requires very careful implementation.
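As a rough illustration, here is a sketch of the distributor part (steps 5 to 9) in C# with Npgsql against PostgreSQL; the private_message and public_message tables, their columns and bigint ids are invented for illustration, and scheduling and error handling are left out:

    using System;
    using System.Collections.Generic;
    using System.Data;
    using Npgsql;

    class Distributor
    {
        // Single dedicated job; runs periodically.
        public static void DistributeBatch(string connString, int batchSize = 100)
        {
            using var conn = new NpgsqlConnection(connString);
            conn.Open();

            // 5. Query a batch of unprocessed messages.
            var ids = new List<long>();
            using (var pick = new NpgsqlCommand(
                "SELECT id FROM private_message WHERE NOT processed ORDER BY id LIMIT @n",
                conn))
            {
                pick.Parameters.AddWithValue("n", batchSize);
                using var r = pick.ExecuteReader();
                while (r.Read()) ids.Add(r.GetInt64(0));
            }
            if (ids.Count == 0) return;

            // 6. Start transaction B.
            using var tx = conn.BeginTransaction();

            // 7. Mark them processed; bail out if another instance beat us to any.
            var mark = new NpgsqlCommand(
                "UPDATE private_message SET processed = true WHERE id = ANY(@ids) AND NOT processed",
                conn, tx);
            mark.Parameters.AddWithValue("ids", ids);
            if (mark.ExecuteNonQuery() != ids.Count) { tx.Rollback(); return; }

            // 8. Publish with strictly sequential Ids; safe because only this
            //    job ever inserts into public_message.
            var publish = new NpgsqlCommand(@"
                INSERT INTO public_message (id, payload)
                SELECT (SELECT coalesce(max(id), 0) FROM public_message)
                       + row_number() OVER (ORDER BY pm.id),
                       pm.payload
                FROM private_message pm
                WHERE pm.id = ANY(@ids)", conn, tx);
            publish.Parameters.AddWithValue("ids", ids);
            publish.ExecuteNonQuery();

            // 9. Commit transaction B.
            tx.Commit();
        }
    }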
It is even suitable for database clusters, as long as there is only a single write node and both reads and writes perform causality checks. It may well be that one of those two conditions is sufficient on its own, but I'd have to consider the implications more carefully to make that claim.
Can someone tell me the principle of how TransactionScope guarantees data integrity across multiple databases? I imagine it first sends the commands to the databases and then waits for the databases to respond before sending them a message to apply the command sent earlier. However, if execution stops abruptly while sending those apply messages, we could still end up with a database that has applied the command and one that has not. Can anyone shed some light on this?
Edit:
I guess what I'm asking is: can I rely on TransactionScope to guarantee data integrity when writing to multiple databases, even in the case of a power outage or a sudden shutdown?
Thanks, Bas
Example:
    using (var scope = new TransactionScope())
    {
        // First database, via the first EF context.
        using (var context = new FirstEntities())
        {
            context.AddToSomethingSet(new Something());
            context.SaveChanges();
        }

        // Second database, via the second EF context (a separate connection).
        using (var context = new SecondEntities())
        {
            context.AddToSomethingElseSet(new SomethingElse());
            context.SaveChanges();
        }

        // Nothing is durably committed unless Complete() is reached.
        scope.Complete();
    }
TransactionScope promotes the transaction to the Distributed Transaction Coordinator (MSDTC) if it detects multiple databases, which then each take part in a two-phase commit. Each participant votes to commit, and hence we get the ACID properties, distributed across databases. It can also be integrated with TxF and TxR. You should be able to use it the way you describe.
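One way to see that promotion happen is to check the distributed identifier from inside the scope (a small diagnostic sketch; drop it in after the second SaveChanges() in your example):

    using System;
    using System.Transactions;

    // Guid.Empty while the transaction is still a lightweight local one;
    // non-empty once the second connection promotes it to MSDTC.
    var info = Transaction.Current.TransactionInformation;
    Console.WriteLine("Escalated to MSDTC: {0}", info.DistributedIdentifier != Guid.Empty);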
The two databases stay consistent because the distributed COM+ transaction running under the DTC has the individual database transactions attached to it.
When a database is ready to commit (e.g. when the TransactionScope completes), it tells the DTC that it votes to commit. Once all databases have done this, each has its change list. As long as the database transactions don't deadlock or conflict with other transactions (e.g. through a fairness algorithm that pre-empts one of them), all operations for each database are already in its transaction log. If the system loses power when one database has not yet finished its commit but another has, the transaction log has already recorded that all resources voted to commit, so there is no logical reason for the commit to fail. Hence, the next time the database that wasn't able to commit boots up, it will finish those transactions left in this indeterminate state.
With distributed transactions it can in fact happen that the databases become inconsistent. You said:
At some point both databases have to be told to apply their changes. Let's say there is a power outage after telling the first db to apply, then the databases are out of sync. Or am I missing something?
You aren't. I think this is known as the Two Generals' Problem. It provably cannot be prevented. The window of failure is, however, quite small.