I have to create records in multiple tables, one after another, and if there is an exception in the data I need to log it into an exception table.
The problem is that in case of an exception (which is a data-related issue, nothing to do with the DB) I need to roll back all the inserts, but the entry in the exception table shouldn't be rolled back.
What should I do? As per my understanding, a COMMIT statement will commit all the inserts along with the exception-table insert.
You may use an AUTONOMOUS routine for logging.
Check the CREATE PROCEDURE statement description.
AUTONOMOUS: Indicates the procedure should execute in its own autonomous transaction scope.
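The engine isn't named here, but for illustration, Firebird exposes the same idea as an IN AUTONOMOUS TRANSACTION DO block in PSQL. A minimal sketch of the logging pattern (the log_error procedure and the error_log table with its columns are hypothetical):

create procedure log_error (msg varchar(1024))
as
begin
  -- runs in its own transaction, so the log row survives
  -- even if the calling transaction rolls back
  in autonomous transaction do
    insert into error_log (logged_at, message)
    values (current_timestamp, :msg);
end

The caller performs its inserts, calls log_error when it hits a data error, and then rolls back; the exception-table row stays because the autonomous transaction committed it independently.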
I was wondering if transactions in Postgres freeze the state of a table, similar to the way a point-in-time does in Elasticsearch.
If I have a query with a WHERE clause and, inside a transaction, I run it first as a count(*), then as a SELECT, and finally as an UPDATE, do I need to be worried about a different process inserting a record into the DB and throwing off the results?
In the default transaction isolation level READ COMMITTED, each statement in a transaction sees a different state (snapshot) of the database.
If you want all statements in a transaction to see the same snapshot, you will have to use the REPEATABLE READ isolation level. However, there is the possibility that concurrent data manipulations cause the UPDATE to fail with a serialization error, forcing you to repeat the transaction.
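A minimal sketch of that pattern (table and column names are made up for illustration):

BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT count(*) FROM orders WHERE status = 'open';  -- the snapshot is taken at this first statement
SELECT * FROM orders WHERE status = 'open';         -- sees the same snapshot
UPDATE orders SET status = 'done' WHERE status = 'open';
-- the UPDATE may fail with a serialization error (SQLSTATE 40001);
-- in that case, ROLLBACK and rerun the whole transaction
COMMIT;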
I'm pretty new to PostgreSQL and I'm sure I'm missing something here.
The scenario is on version 11: a big DROP TABLE and INSERT transaction on a given table, executed through the Node.js driver, which may take 30 minutes.
While that runs, if I try to query that table with a SELECT using the JDBC driver, the query waits for the transaction to finish. If I close the transaction (by letting it finish or by forcing it to exit), the JDBC query becomes responsive.
I thought I could read a table over one connection while performing a transaction on another one.
What am I missing here?
Should I keep the table (without dropping it at the beginning of the transaction)?
DROP TABLE takes an ACCESS EXCLUSIVE lock on the table, which is there precisely to prevent it from taking place concurrently with any other operation on the table. After all, DROP TABLE physically removes the table.
Since all locks are held until the end of the database transaction, all access to the dropped table is blocked until the transaction ends.
Of course the files are only removed when the transaction commits, so you might wonder why PostgreSQL doesn't let concurrent transactions read in the meantime. But that would mean that COMMIT could be blocked by a concurrent reader, or that a SELECT could hit a system error in the middle of reading, neither of which sounds appealing.
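A quick two-session sketch of the effect (big_table is a stand-in name):

-- session 1
BEGIN;
DROP TABLE big_table;  -- takes ACCESS EXCLUSIVE, held until COMMIT or ROLLBACK
-- ... recreate the table and run the long INSERT ...

-- session 2, concurrently
SELECT count(*) FROM big_table;  -- blocks until session 1's transaction ends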
I'm experiencing a performance drop in one of our Firebird stored procedures and I have no clue why. I found the following code in the SP in question:
declare v_dummy integer;
...
in autonomous transaction do
begin
  -- insert may fail, but that is not a problem because it means the record is already there
  insert into my_table(my_field) values (:input_param);

  when ANY do
    v_dummy = 1;
end
I see a few dozen records in the RDB$TRANSACTIONS table with STATE 3, and no relevant records in the MON$TRANSACTIONS table.
The question is: if the insert fails, will the autonomous transaction be rolled back, or does the "when ANY do" prevent the rollback and leave an open transaction? Can I just remove the exception handling, so the transaction is rolled back automatically without raising an exception and blocking the rest of the code?
Using a when any do inside an autonomous transaction block will not roll back the transaction; instead, the transaction commits once the block ends, because the exception does not escape the block.
However, this is probably the desired result: committing transactions in Firebird is (relatively) cheaper than rolling back. In fact, if a transaction rolls back when nothing was changed, Firebird will convert a rollback into a commit anyway.
I don't think this is the cause of your performance problem, but without a reproducible example, it is hard to reason about this.
As an aside, transactions with state 3 are rolled back, and rolled back transactions have ended. MON$TRANSACTIONS only shows active transactions, so rolled back transactions will not be shown in that virtual table.
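Tangentially: if the intent of the block is just "insert the value unless it is already there", Firebird's UPDATE OR INSERT can express that without raising and swallowing an exception. A sketch, assuming my_field is the column being deduplicated:

-- updates a matching row instead of raising a duplicate-key error
update or insert into my_table (my_field)
values (:input_param)
matching (my_field);

Whether that changes anything for the performance problem is a separate question.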
Is it possible to find which records on which table are locked by a specific statement, transaction or attachment on FirebirdSQL 2.5?
I'm getting "lock conflict on no wait transaction" error messages in some situations.
I'm trying to trace the problem by looking for ACTIVE transactions and statements using MON$STATEMENTS, MON$TRANSACTIONS and MON$ATTACHMENTS. I know the exact SQL statement from MON$STATEMENTS, but in some cases it is a stored procedure execution that updates many tables, so finding the actual locked record or table would make the analysis easier.
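For reference, this is the kind of query I run against the monitoring tables (a sketch; in Firebird 2.5, MON$STATE = 1 on MON$STATEMENTS marks statements that are currently executing):

select a.mon$user,
       a.mon$remote_address,
       t.mon$transaction_id,
       t.mon$lock_timeout,
       s.mon$sql_text
from mon$statements s
join mon$transactions t on t.mon$transaction_id = s.mon$transaction_id
join mon$attachments a on a.mon$attachment_id = s.mon$attachment_id
where s.mon$state = 1;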
I'm using PostgreSQL 9.2 in a Windows environment.
I'm in a 2PC (2 phase commit) environment using MSDTC.
I have a client application that starts a transaction at the SERIALIZABLE isolation level, inserts a new row of data in a table for a specific foreign key value (there is an index on the column), and votes for completion of the transaction (the transaction is PREPARED). The transaction will be COMMITTED by the transaction coordinator.
Immediately after that, outside of a transaction, the same client requests all the rows for this same specific foreign key value.
Because there may be a delay before the previous transaction is really committed, the SELECT may return a previous snapshot of the data. In fact, it does happen sometimes, and this is problematic. Of course the application may be redesigned, but until then, I'm looking for a lock solution. An advisory lock?
I already solved the problem for UPDATEs on specific rows by using SELECT ... FOR SHARE, and it works well: the SELECT waits until the transaction commits and returns the old and new rows.
Now I'm trying to solve it for INSERT.
SELECT ... FOR SHARE does not block and returns immediately.
There is no concurrency issue here as only one client deals with a specific set of rows. I already know about MVCC.
Any help appreciated.
To wait for a not-yet-committed INSERT you'd need to take a predicate lock. There's limited predicate locking in PostgreSQL for the serializable support, but it's not exposed directly to the user.
Simple SERIALIZABLE isolation won't help you here, because SERIALIZABLE only requires that there be an order in which the transactions could've occurred to produce a consistent result. In your case this ordering is SELECT followed by INSERT.
The only option I can think of is to take an ACCESS EXCLUSIVE lock on the table before INSERTing. This will only get released at COMMIT PREPARED or ROLLBACK PREPARED time, and in the meantime any other queries will wait for the lock. You can enforce this via a BEFORE trigger to avoid the need to change the app. You'll probably get the odd deadlock and rollback if you do it that way, though, because INSERT takes a lower-level lock first and the trigger then attempts lock promotion. If possible, it's better to run the LOCK TABLE ... IN ACCESS EXCLUSIVE MODE command before the INSERT.
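A sketch of that last variant in the 2PC flow (table, columns, and the transaction identifier are made up):

-- writer
BEGIN;
LOCK TABLE child_rows IN ACCESS EXCLUSIVE MODE;  -- held until COMMIT PREPARED / ROLLBACK PREPARED
INSERT INTO child_rows (parent_id, payload) VALUES (42, 'example');
PREPARE TRANSACTION 'txn-42';

-- issued later by the transaction coordinator:
COMMIT PREPARED 'txn-42';

-- a concurrent reader simply waits on the lock until then:
SELECT * FROM child_rows WHERE parent_id = 42;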
As you've alluded to, this is mostly an application design problem. Expecting to see not-yet-committed rows doesn't really make any sense.