Firebird 2.5 exception handling within autonomous transaction

I'm experiencing a performance drop in one of our Firebird stored procedures and I have no clue why. I found the following code in the SP in question:
declare v_dummy integer;
...
in autonomous transaction do
begin
  -- insert may fail, but that is not a problem because it means the record is already there
  insert into my_table(my_field) values (:input_param);

  when ANY do
    v_dummy = 1;
end
I see a few dozen records in the RDB$TRANSACTIONS table with STATE 3, and no relevant records in the MON$TRANSACTIONS table.
The question is: if the insert fails, will the autonomous transaction be rolled back, or does the "when ANY do" prevent the rollback and leave an open transaction? Can I just remove the exception handling, so the transaction is rolled back automatically without raising an exception and blocking the rest of the code?

Using a when any do inside an autonomous transaction block will not roll back the transaction; instead, the transaction will commit once the block ends, because the exception does not escape the block.
However, this is probably the desired result: committing a transaction in Firebird is (relatively) cheaper than rolling it back. In fact, if a transaction in which nothing was changed rolls back, Firebird will convert the rollback into a commit anyway.
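Incidentally, if the intent of the block is just "insert the row if it is not already there", Firebird (2.1 and later) can express that without raising an exception at all, using UPDATE OR INSERT. A minimal sketch, assuming my_field is the column the duplicates collide on:

-- updates the existing row (a no-op here) or inserts a new one,
-- so no exception is raised for duplicates
update or insert into my_table (my_field)
values (:input_param)
matching (my_field);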
I don't think this is the cause of your performance problem, but without a reproducible example it is hard to reason about it.
As an aside, transactions with state 3 are rolled back, and rolled-back transactions have ended. MON$TRANSACTIONS only shows active transactions, so rolled-back transactions will not show up in that virtual table.
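If you want to double-check which transactions are actually still open, the monitoring tables can be queried directly (Firebird 2.1 and later), for example:

-- lists the transactions that are currently open
select mon$transaction_id, mon$attachment_id, mon$timestamp
from mon$transactions;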

Related

commit to single table and roll back to others

I have to create records in multiple tables, one after another, serially, and if there is some exception in the data I need to log it to an exception table.
The problem is that in case of an exception (which is a data-related issue, nothing to do with the DB) I need to roll back all the inserts, but the entry in the exception table shouldn't be rolled back.
What should I do? As per my understanding, a COMMIT statement will commit all the inserts along with the insert into the exception table.
You may use an AUTONOMOUS routine for logging.
Check the CREATE PROCEDURE statement description.
AUTONOMOUS
Indicates the procedure should execute in its own autonomous transaction scope.
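If the database at hand is Firebird (as in the question that opened this page), the same idea is expressed with an in autonomous transaction block inside the logging procedure rather than a procedure-level keyword. A minimal sketch, assuming a hypothetical error_log table:

create procedure log_error (msg varchar(1024))
as
begin
  -- this block commits independently of the caller's transaction,
  -- so the log entry survives even if the caller later rolls back
  in autonomous transaction do
    insert into error_log (logged_at, message)
    values (current_timestamp, :msg);
end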

Discover if another record is being inserted right now by another transaction in postgresql

Imagine there is an open, ongoing transaction in PostgreSQL inserting a record and doing something else as well.
BEGIN;
INSERT INTO films(id, name) VALUES (10, 'A comedy');
-- We are at this moment in time
-- The transaction is not yet committed
-- ...
COMMIT;
Is there any non-blocking way to discover from outside of this transaction that there is an ongoing transaction inserting record with ID=10 right now?
The only way I could think of was:
BEGIN;
SET statement_timeout TO 100;
INSERT INTO films(id, name) VALUES (10, '') ON CONFLICT (id) DO NOTHING;
ROLLBACK;
if I get a timeout from the INSERT, then it means that there is another ongoing transaction
if I inserted nothing, then there was a transaction which has now finished, and a conflict on the unique ID occurred
if the INSERT succeeded and I then rolled back, then it means there is no transaction right now trying to insert a row with ID=10
However, this is less than ideal because:
It is not non-blocking; I am waiting 100 ms here
I am doing an INSERT operation, whereas I would prefer a read-only solution
As far as I understand, I am actively triggering a conflict, and I cannot easily enforce that my second (probing) transaction is the one that loses the conflict, so I cannot guarantee I will never interrupt the work of the first transaction
I am effectively trying to work around the lack of a READ UNCOMMITTED transaction isolation level in PostgreSQL.
I am in charge of both parts of the code, so I can change either of them in any way necessary.
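For what it's worth, the probe can be made to fail fast only when it actually blocks on the other transaction by using lock_timeout (PostgreSQL 9.3 and later) instead of statement_timeout; a sketch of the same idea:

BEGIN;
SET LOCAL lock_timeout = '100ms';  -- fires only while waiting on a lock
INSERT INTO films(id, name) VALUES (10, '') ON CONFLICT (id) DO NOTHING;
-- a lock_timeout error here means another transaction is inserting id 10 right now
ROLLBACK;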

Postgres returns errors on future transactions

I am currently migrating from MySQL to Postgres, using pgbouncer for my connection pool.
We select/insert/update/delete lots of data in Postgres, and it all comes from remote sources, so we try to make the data quality as good as possible before an insert, but sometimes some bad data slips through.
This causes Postgres to report: current transaction is aborted, commands ignored until end of transaction block
This is fine, except that the connection through pgbouncer will report this error for every subsequent query. I get the same behavior if I connect directly to Postgres instead of pgbouncer. I'd expect it to roll back whichever transaction caused this issue.
Is there a way to just roll back and continue working like normal? Everything I've read just says to fix the query, but in this case that's not always possible.
You need to use the ROLLBACK command. This will undo everything since the last BEGIN TRANSACTION or START TRANSACTION. Note that transactions do not nest; if you've begun multiple transactions without committing, this will roll back the outermost transaction.
This will drop you into autocommit mode. You may want to issue a new BEGIN TRANSACTION command to open a new transaction.
You should also be able to ROLLBACK TO SAVEPOINT, if you have a savepoint from before the error.
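A minimal sketch of the savepoint pattern, assuming a throwaway table t(x integer):

BEGIN;
INSERT INTO t VALUES (1);
SAVEPOINT before_risky;
INSERT INTO t VALUES ('bad');        -- fails; without a savepoint the transaction would be unusable
ROLLBACK TO SAVEPOINT before_risky;  -- the transaction is usable again
INSERT INTO t VALUES (2);
COMMIT;                              -- rows 1 and 2 are committed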
(If at all possible, it is preferred to just fix the query, but depending on what you're doing, that may be prohibitively difficult.)

How to make a SELECT wait until a pending INSERT commits?

I'm using PostgreSQL 9.2 in a Windows environment.
I'm in a 2PC (2 phase commit) environment using MSDTC.
I have a client application that starts a transaction at the SERIALIZABLE isolation level, inserts a new row of data in a table for a specific foreign key value (there is an index on the column), and votes for completion of the transaction (the transaction is PREPARED). The transaction will be COMMITTED by the Transaction Coordinator.
Immediately after that, outside of a transaction, the same client requests all the rows for this same specific foreign key value.
Because there may be a delay before the previous transaction is really committed, the SELECT may return a previous snapshot of the data. In fact, it does happen sometimes, and this is problematic. Of course the application could be redesigned, but until then I'm looking for a locking solution. An advisory lock?
I already solved the problem for UPDATEs on specific rows by using SELECT...FOR SHARE, and it works well: the SELECT waits until the transaction commits and returns old and new rows.
Now I'm trying to solve it for INSERT.
SELECT...FOR SHARE does not block and returns immediately.
There is no concurrency issue here as only one client deals with a specific set of rows. I already know about MVCC.
Any help appreciated.
To wait for a not-yet-committed INSERT you'd need to take a predicate lock. There's limited predicate locking in PostgreSQL for the serializable support, but it's not exposed directly to the user.
Simple SERIALIZABLE isolation won't help you here, because SERIALIZABLE only requires that there be some order in which the transactions could have occurred to produce a consistent result; in your case that ordering is SELECT followed by INSERT.
The only option I can think of is to take an ACCESS EXCLUSIVE lock on the table before INSERTing. This lock is only released at COMMIT PREPARED or ROLLBACK PREPARED time, and in the meantime any other queries will wait for it. You can enforce this via a BEFORE trigger to avoid the need to change the app, but you'll probably get the odd deadlock and rollback that way, because the INSERT takes a weaker lock first and the trigger then attempts lock promotion. If possible, it's better to run the LOCK TABLE ... IN ACCESS EXCLUSIVE MODE command before the INSERT.
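A minimal sketch of that last variant, reusing the films table from the earlier question (the prepared-transaction name is illustrative, and max_prepared_transactions must be non-zero):

BEGIN;
LOCK TABLE films IN ACCESS EXCLUSIVE MODE;  -- all other queries on films now wait
INSERT INTO films(id, name) VALUES (10, 'A comedy');
PREPARE TRANSACTION 'tx_insert_10';
-- the lock is held until the coordinator runs
-- COMMIT PREPARED 'tx_insert_10' (or ROLLBACK PREPARED 'tx_insert_10')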
As you've alluded to, this is mostly an application mis-design problem. Expecting to see not-yet-committed rows doesn't really make any sense.

Transactions: when should they be discarded and rolled back?

I'm trying to debug an application (under PostgreSQL) and came across the following error:
"current transaction is aborted, commands ignored".
As far as I can understand, a "transaction" is just a notion related to the underlying database connection.
If the connection has auto-commit set to false, you can keep executing queries through the same Statement as long as none of them fails; if one does, you should roll back.
If auto-commit is true, then it doesn't matter, as each of your queries is considered atomic.
Using auto-commit false, I get the aforementioned error from PostgreSQL even when a simple
select * from foo
fails, which makes me ask: under which SQLException(s) is a "transaction" considered invalid, and should it be rolled back or not used for another query?
using MacOS 10.5, Java 1.5.0_16, PostgreSQL 8.3 with JDBC driver 8.1-407.jdbc3
That error means that one of the queries sent in the transaction has failed, so the rest of the queries are ignored until the end of the current transaction (which will automatically be a rollback). To PostgreSQL the transaction has failed, and it will be rolled back in any case after the error, with one exception. You have to take appropriate measures, one of:
discard the statement and start anew.
use SAVEPOINTs in the transaction to be able to get back to that point in time and try another path. (This is the exception.)
Enable query logging to see which query is the failing one and why.
In any case, the exact answer to your question is that any SQLException should mean a rollback happens when the end-of-transaction command is sent, that is, when a COMMIT or ROLLBACK (or END) is issued. This is how it works; if you use savepoints you'll still be bound by the same rules, you'll just be able to get back to where you saved and try something else.
This seems to be a characteristic behaviour of PostgreSQL that is not shared by most other DBMSs. In general (outside of PostgreSQL), you can have one operation fail because of an error and then, in the same transaction, try alternative actions that will succeed, compensating for the error. One example: consider a merging (insert/update) operation. If you try to INSERT the new record but find that it already exists, you can switch to an UPDATE operation that changes the existing record instead. This works fine in all the main DBMSs. I'm not certain that it does not work in PostgreSQL, but the descriptions I've seen elsewhere, as well as in this question, suggest that when the attempted INSERT fails, any further activity in the transaction is doomed to fail too. Which is at best draconian and at worst 'unusable'.
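As the earlier answers note, PostgreSQL does support this merge pattern if the risky statement is wrapped in a savepoint; a sketch, assuming a table my_table(id integer primary key, val text):

BEGIN;
SAVEPOINT merge_attempt;
INSERT INTO my_table(id, val) VALUES (1, 'new');
-- if the INSERT raised unique_violation, the transaction is still usable:
--   ROLLBACK TO SAVEPOINT merge_attempt;
--   UPDATE my_table SET val = 'new' WHERE id = 1;
COMMIT;

(On modern versions, INSERT ... ON CONFLICT (id) DO UPDATE performs the same merge in a single statement, though that was not available in the PostgreSQL 8.3 era of this question.)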