Transaction & Locks Problem - progress-4gl

Within a DO TRANSACTION block I defined a label, and inside that labelled block I am accessing a table with EXCLUSIVE-LOCK. At the end of the labelled block I have made all my changes to that table, but I am still inside the transaction block.
Now I tried to access that same table in another session, and it shows an error: table in use by another user. So is it possible to release the table within the transaction, so another user can access it?
For example:
Session 1)
DO TRANSACTION:
---
---
loopb:
REPEAT:
--
--
---------------------> control is here right now.
END. /*repeat*/
--
--
END. /*do transaction*/
Session 2)
I tried to access the same table, but it shows an error that the table is locked by another user.

All those records you touched in the loop using EXCLUSIVE-LOCK will not be available to be locked by another user until the TRANSACTION is complete. There is no getting around this. If the second process needs to lock those records, then all you can do is decrease your TRANSACTION scope in the first process. This is a safety feature so that if an error happens later on in the TRANSACTION, all the changes made during the TRANSACTION will be rolled back. Another way to look at it is if you could release some record locks during a TRANSACTION, you would lose the atomicity (all-or-nothingness) that is part of the definition of a TRANSACTION.
It should be noted that if you don't really need to lock those records in the second process but just need to see their updated value, that is possible. Once the updated records are no longer in the record buffer (or the record lock status is downgraded to a NO-LOCK in the TRANSACTION), they will become limbo locks and you can view their updated values using a NO-LOCK. To make the last record in the loop become a limbo lock, you can either do this
FIND CURRENT tablerecord NO-LOCK.
Or this, if you do not need to access the record buffer any longer:
RELEASE tablerecord.

Other sessions can do a "dirty read" of the record using NO-LOCK. But they will not be able to lock it or update it until the transaction is committed (or rolled back), and in your example that won't happen until the outer DO TRANSACTION block ends.

Related

IBM CDC Table should already have been refreshed. Transformation Server will terminate

I have a table with both source and target as IBM DB2 iSeries. The replication method is Mirror. During the refresh before mirroring, the message "Table <lib>/<table> should already have been refreshed. Transformation Server will terminate." occurs and the state of the table stays as Refresh. Other tables in the same subscription are running normally. Below is the detailed log:
source
Table lib/table, member table will be refreshed to subscription.
Table lib/table, member table refresh to subscription is complete 200000 rows sent.
Table lib/table member table could not be refreshed.
Table lib/table should already have been refreshed. Transformation Server will terminate.
target
Refresh started for target table lib/table, member *ONLY.
220310 rows deleted from member *FIRST of table lib/table.
Refresh completed for table lib/table, member *ONLY. 200000 rows received, 199500 rows successfully applied, 500 rows failed.
Does anyone have any ideas towards this kind of situation?
CDC iSeries will try to get a very short exclusive lock (allow read) to ensure that there are no uncommitted commit cycles involving the table at the time that the refresh starts. If it cannot get the lock, it skips the refresh, moves on to the next table, and posts the message that you have reported.
So you will need to run the refresh of the table at a time of low activity on the table (or no activity).
This lock is required to ensure consistency if the source application is updating the table under commitment control, as the journal scraper would otherwise ignore any transactions belonging to a commit cycle that started before the refresh itself started.
If the source application is not using commitment control at all and the iSeries is the only source then you can get the target to ignore commitment control. The source will then know not to try the lock.
To turn off commitment control for a Java-based target, add the target system parameter mirror_commit_on_transaction_boundary and set it to false; if the target is iSeries, change the target commitment control parameter to *NONE.
Please be sure that commitment control is not used at all if you make this change on the target, as otherwise you may have some troublesome synchronisation issues if changes are rolled back concurrently with a table refresh.
Seeing the job log might give more clarity on the cause of this behaviour, as it can happen for many reasons.
One thing that can be tried, in Management Console, is to select the mapped tables,
park the table,
flag it for Refresh and start the subscription; it will refresh the table and it enters the "Active" state.
Thanks

PostgreSQL: update operation synchronization

For example, I have this query:
UPDATE foo_table SET viewed_count = coalesce(viewed_count, 0) + 1
Suppose 100 clients execute this query at the same time.
Is there any guarantee that viewed_count will be incremented by 100?
Yes.
After obtaining the lock on the row, a transaction in READ COMMITTED isolation level will re-read the current version of the row.
See the documentation:
UPDATE, DELETE, SELECT FOR UPDATE, and SELECT FOR SHARE commands behave the same as SELECT in terms of searching for target rows: they will only find target rows that were committed as of the command start time. However, such a target row might have already been updated (or deleted or locked) by another concurrent transaction by the time it is found. In this case, the would-be updater will wait for the first updating transaction to commit or roll back (if it is still in progress). If the first updater rolls back, then its effects are negated and the second updater can proceed with updating the originally found row. If the first updater commits, the second updater will ignore the row if the first updater deleted it, otherwise it will attempt to apply its operation to the updated version of the row.
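In other words, a minimal sketch of how two of those 100 sessions interleave under the default READ COMMITTED isolation level (table and column names are the ones from the question):
-- Session A:
BEGIN;
UPDATE foo_table SET viewed_count = coalesce(viewed_count, 0) + 1;  -- takes locks on the updated rows
-- Session B now runs the same UPDATE and blocks on those row locks.
COMMIT;
-- Session B wakes up, re-reads the newly committed row versions and adds
-- its own +1 on top of them, so 100 such clients add exactly 100.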

Postgres Lock(For Update) row level locking returns value post lock

I am trying to implement a row-level lock using the query below:
BEGIN;
SELECT * FROM public.student WHERE student_code = 1 FOR UPDATE;
I ran the above query in pgAdmin.
Now, if I do a SELECT through the application, the SELECT returns a value. Ideally it should not.
What am I doing wrong here?
This is a feature, not a bug; see the documentation:
Read Committed Isolation Level
[...]
UPDATE, DELETE, SELECT FOR UPDATE, and SELECT FOR SHARE commands behave the same as SELECT in terms of searching for target rows: they will only find target rows that were committed as of the command start time. However, such a target row might have already been updated (or deleted or locked) by another concurrent transaction by the time it is found. In this case, the would-be updater will wait for the first updating transaction to commit or roll back (if it is still in progress). If the first updater rolls back, then its effects are negated and the second updater can proceed with updating the originally found row. If the first updater commits, the second updater will ignore the row if the first updater deleted it, otherwise it will attempt to apply its operation to the updated version of the row. The search condition of the command (the WHERE clause) is re-evaluated to see if the updated version of the row still matches the search condition. If so, the second updater proceeds with its operation using the updated version of the row. In the case of SELECT FOR UPDATE and SELECT FOR SHARE, this means it is the updated version of the row that is locked and returned to the client.
After all, that is the meaning of "READ COMMITTED" – you see the latest committed version of the row.
If you want to avoid that, you can use the REPEATABLE READ isolation level. Then you get read stability, so that you see the same state of the database for the whole transaction. In that case, you will receive a serialization error, since the row version that can be locked (the latest one) is not the one you see.
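A minimal sketch of both behaviours, using the student table from the question (the grade column is made up; it is only there so the row actually changes):
-- Session 1 (e.g. in pgAdmin):
BEGIN;
SELECT * FROM public.student WHERE student_code = 1 FOR UPDATE;  -- locks the row
UPDATE public.student SET grade = 'A' WHERE student_code = 1;    -- and modifies it
-- Session 2 (the application), default READ COMMITTED:
SELECT * FROM public.student WHERE student_code = 1;             -- returns the last committed version immediately
SELECT * FROM public.student WHERE student_code = 1 FOR UPDATE;  -- blocks until session 1 commits or rolls back
-- Session 2 under REPEATABLE READ instead, snapshot taken before session 1 commits:
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT * FROM public.student WHERE student_code = 1;             -- takes the snapshot
-- (session 1 commits its change here)
SELECT * FROM public.student WHERE student_code = 1 FOR UPDATE;
-- ERROR: could not serialize access due to concurrent update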

Suspend transaction in Postgres

I have seen another database system that offers the ability to suspend a transaction. The current transaction is kept intact but put on hold while your code is allowed to work with the database to effect immediate, permanent changes to rows. Then you can resume the transaction, continuing where you left off with the same locks and other transaction protections in place, as if you'd never interrupted it.
For example, say a customer is placing an order, in a transaction. During that transaction, the customer notices their phone number needs to be updated, so we change that data. Next, the customer decides to cancel the not-yet-completed order. A rollback of the order has the unintended consequence of also undoing the phone number change. So it would be nice if we could:
Suspend the transaction for the order.
Update the phone number, committed to the database.
Resume the transaction for the order.
Is there some way to suspend a transaction in Postgres? In JDBC?
If a transaction cannot continue, it must roll back.
If your transaction has a point at which you don't know how to carry on, then your transaction logic is flawed and you need to reorganize it: either split it into multiple transactions (or sub-transactions, a.k.a. savepoints), or take out the parts that do not belong to the transaction logic.
Is there some way to suspend a transaction in Postgres?
No, no such thing. And the data integrity principle is unconditional as to time.
No.
The closest things are
prepared transactions: this allows (with some conditions) for a transaction to be saved, and then later rolled back or committed.
savepoints: this allows for "nested transactions", where portions of transactions can be rolled back.
Neither of these fits exactly what you are looking for. It seems that your example has two operations that do not need to be part of the same transaction at all, since the phone number update appears to be unrelated to the success of the order. (Also, a long-running transaction is a bad idea... your order should probably be a state machine implemented without a long-running transaction.)
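For completeness, a minimal sketch of the savepoint route (table and column names are made up); note that it only helps if the phone change comes first and the transaction as a whole is still committed:
BEGIN;
UPDATE customer SET phone = '555-0100' WHERE customer_id = 42;  -- phone change, done first
SAVEPOINT order_work;
INSERT INTO orders (customer_id, item) VALUES (42, 'widget');   -- the order itself
-- the customer cancels the not-yet-completed order:
ROLLBACK TO SAVEPOINT order_work;                                -- undoes only the order
COMMIT;                                                          -- the phone change is kept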
Workaround – open second connection
In JDBC, you could just open a second connection to the database.
Do your separate work on that second connection and close. The first connection is still open and remains in its same state. Any active transaction in that first connection remains.

Executing a trigger AFTER the completion of a transaction

In PostgreSQL, are DEFERRED triggers executed before (within) the completion of the transaction or just after it?
The documentation says:
DEFERRABLE
NOT DEFERRABLE
This controls whether the constraint can be deferred. A constraint
that is not deferrable will be checked immediately after every
command. Checking of constraints that are deferrable can be postponed
until the end of the transaction (using the SET CONSTRAINTS command).
It doesn't specify whether it is still inside the transaction or outside it. My personal experience says that it is inside the transaction, and I need it to be outside!
Are DEFERRED (or INITIALLY DEFERRED) triggers executed inside of the transaction? And if they are, how can I postpone their execution to the time when the transaction is completed?
To give you a hint of what I'm after: I'm using pg_notify and RabbitMQ (PostgreSQL LISTEN Exchange) to send out messages. I process such messages in an external application. Right now I have a trigger which notifies the external app of newly inserted records by including the record's id in the message. But in a non-deterministic way, once in a while, when I try to select a record by the id at hand, the record cannot be found. That's because the transaction is not complete yet and the record is not actually added to the table. If I could only postpone the execution of the trigger until after the completion of the transaction, everything would work out.
In order to get better answers, let me explain the situation even closer to the real world. The actual scenario is a little more complicated than what I explained before. The source code can be found here if anyone's interested. Because of reasons that I'm not going to dig into, I have to send the notification from another database, so the notification is actually sent like:
PERFORM * FROM dblink('hq','SELECT pg_notify(''' || channel || ''', ''' || payload || ''')');
Which I'm sure makes the whole situation much more complicated.
Triggers (including all sorts of deferred triggers) fire inside the transaction.
But that is not the problem here, because notifications are delivered between transactions anyway.
The manual on NOTIFY:
NOTIFY interacts with SQL transactions in some important ways.
Firstly, if a NOTIFY is executed inside a transaction, the notify
events are not delivered until and unless the transaction is
committed. This is appropriate, since if the transaction is aborted,
all the commands within it have had no effect, including NOTIFY. But
it can be disconcerting if one is expecting the notification events to
be delivered immediately. Secondly, if a listening session receives a
notification signal while it is within a transaction, the notification
event will not be delivered to its connected client until just after
the transaction is completed (either committed or aborted). Again, the
reasoning is that if a notification were delivered within a
transaction that was later aborted, one would want the notification to
be undone somehow — but the server cannot "take back" a notification
once it has sent it to the client. So notification events are only
delivered between transactions. The upshot of this is that
applications using NOTIFY for real-time signaling should try to keep
their transactions short.
Bold emphasis mine.
pg_notify() is just a convenient wrapper function for the SQL NOTIFY command.
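A minimal sketch of that delivery-at-commit behaviour (the channel name is made up):
-- Listening session:
LISTEN my_channel;
-- Notifying session:
BEGIN;
NOTIFY my_channel, 'hello';   -- equivalently: SELECT pg_notify('my_channel', 'hello');
-- nothing reaches the listener yet
COMMIT;                       -- only now is the notification delivered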
If some rows cannot be found after a notification has been received, there must be a different cause! Go find it. Likely candidates:
Concurrent transactions interfering
Triggers doing something more or different than you think they do.
All sorts of programming errors.
Either way, like the manual suggests, keep transactions that send notifications short.
dblink
Update: Transaction control in a PROCEDURE or DO statement in Postgres 11 or later makes this a lot simpler. Just COMMIT; to (also) send waiting notifications.
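A minimal sketch of that variant (Postgres 11 or later; the table and channel names are made up):
DO
$$
BEGIN
   UPDATE orders SET processed = true WHERE order_id = 1;  -- work of the first transaction
   PERFORM pg_notify('my_channel', 'order 1 processed');   -- queued until commit
   COMMIT;  -- ends the transaction and delivers the notification
   -- an implicit new transaction starts here; further work could follow
END
$$;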
Original answer (mostly for Postgres 10 or older):
PERFORM * FROM dblink('hq','SELECT pg_notify(''' || channel || ''', ''' || payload || ''')');
... which should be rewritten with format() to simplify and make the syntax secure:
PERFORM dblink('hq', format('NOTIFY %I, %L', channel, payload));
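For illustration, format() builds a safely quoted command from made-up values; %I quotes the identifier only when necessary and %L produces a properly escaped literal:
SELECT format('NOTIFY %I, %L', 'my_channel', 'it''s done');
-- result: NOTIFY my_channel, 'it''s done'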
dblink is a game-changer here, because it opens a separate transaction in the other database. This is sometimes used to fake autonomous transactions.
Does Postgres support nested or autonomous transactions?
How do I do large non-blocking updates in PostgreSQL?
dblink() waits for the remote command to finish. So the remote transaction will most probably commit first. The manual:
The function returns the row(s) produced by the query.
If you can send notification from the same transaction instead, that would be a clean solution.
Workaround for dblink
If notifications have to be sent from a different transaction, there is a workaround with dblink_send_query():
dblink_send_query sends a query to be executed asynchronously, that is, without immediately waiting for the result.
DO -- or a plpgsql function
$$
BEGIN
-- do stuff
PERFORM dblink_connect('con1', 'your_connstr_or_foreign_server_here');  -- open a named connection
PERFORM dblink_send_query('con1', format('SELECT pg_sleep(3); NOTIFY %I, %L', 'Channel', 'payload'));  -- asynchronous: does not wait for the result
PERFORM dblink_disconnect('con1');
END
$$;
If you do this right before the end of the transaction, your local transaction gets a 3-second (pg_sleep(3)) head start to commit. Choose an appropriate number of seconds.
There is an inherent uncertainty to this approach, since you get no error message if anything goes wrong. For a secure solution you need a different design. After successfully sending the command, the chances of it still failing are extremely slim, though. The chance that successful notifications are missed seems much higher, but that's built into your current solution already.
Safe alternative
A safer alternative would be to write to a queue table and poll it, as discussed in @Bohemian's answer. This related answer demonstrates how to poll safely:
Postgres UPDATE … LIMIT 1
I'm posting this as an answer, assuming the actual problem you are trying to solve is deferring execution of an external process until after the transaction is completed (rather than the X-Y "problem" you're trying to solve using trigger Kung Fu).
Having the database tell an app to do something is a broken pattern. It's broken because:
There's no fallback if the app doesn't get the message, e.g. because it's down, the network explodes, whatever. Even the app replying with an acknowledgment (which it can't) wouldn't fix this problem (see next point)
There's no sensible way to retry the work if the app gets the message but fails to complete it (for any of lots of reasons)
In contrast, using the database as a persistent queue, and having the app poll it for work, and take the work off the queue when work is complete, has none of the above problems.
There are lots of ways to achieve this. The one I prefer is to have some process (usually trigger on insert, update and delete) put data into a "queue" table. Have another process poll that table for work to do, and delete from the table when work is complete.
It also adds some other benefits:
The production and consumption of work is decoupled, which means you can safely kill and restart your app (which must happen from time to time, e.g. when deploying): the queue table will happily grow while the app is down, and will drain when the app is back up. You can even replace the app with an entirely new one.
If for whatever reason you want to initiate processing of certain items, you can just manually insert rows into the queue table. I used this technique myself to initiate the processing of all items in a database that needed initialising, by putting them on the queue once. Importantly, I didn't need to do a perfunctory update to every row just to fire the trigger.
Getting to your question, a slight delay can be introduced by adding a timestamp column to the queue table and having the poll query only select rows that are older than (say) 1 second, which gives the database time to complete its transaction (see the sketch below).
You can't overload the app. The app will read only as much work as it can handle. If your queue is growing, you need a faster app, or more apps.
If multiple consumers are operating, concurrency can be solved by (for example) adding a "token" column to the queue table.
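A minimal sketch of this pattern (all names are made up; FOR UPDATE SKIP LOCKED, available since Postgres 9.5, stands in for the "token" column as one way to handle multiple consumers):
-- Queue table, populated by a trigger or by the application:
CREATE TABLE work_queue (
    id         bigserial PRIMARY KEY,
    payload    jsonb NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now()
);
-- Consumer: claim and remove a batch in one statement. Rows locked by other
-- consumers are skipped, and only rows older than 1 second are taken so the
-- producing transaction has had time to complete.
WITH claimed AS (
    SELECT id
    FROM   work_queue
    WHERE  created_at < now() - interval '1 second'
    ORDER  BY id
    LIMIT  10
    FOR UPDATE SKIP LOCKED
)
DELETE FROM work_queue q
USING  claimed c
WHERE  q.id = c.id
RETURNING q.payload;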
Queues backed by database tables are the basis of how persistent queues are implemented in commercial-grade queue-based platforms, so the pattern is well tested, used and understood.
Leave the database to do what it does best, and the only thing it does well: Manage data. Don't try to make your database server into an app server.