Anyone know how interlocks work regarding trigger steps, i.e. if a builder holds an exclusive lock over a builddir, will its trigger steps also inherit that lock or not?
No, there is no inheritance of locks between triggered steps and their caller.
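A hedged master.cfg sketch of what that implies in practice (builder, scheduler and worker names here are invented): since nothing is passed down from the caller, the triggered builder has to request the same MasterLock explicitly.

```python
# Fragment of a hypothetical master.cfg. Because triggered builds do NOT
# inherit the caller's locks, the triggered builder must declare the
# same MasterLock itself.
from buildbot.plugins import util, steps

builddir_lock = util.MasterLock("builddir")

parent_factory = util.BuildFactory([
    # waitForFinish=False here: if the parent held the exclusive lock
    # while waiting, the child could never acquire it and would deadlock
    steps.Trigger(schedulerNames=["child-sched"], waitForFinish=False),
])
child_factory = util.BuildFactory([steps.ShellCommand(command=["make"])])

c['builders'] = [
    util.BuilderConfig(name="parent", workernames=["w1"],
                       factory=parent_factory,
                       locks=[builddir_lock.access('exclusive')]),
    # the lock must be declared here too, or the child runs unprotected
    util.BuilderConfig(name="child", workernames=["w1"],
                       factory=child_factory,
                       locks=[builddir_lock.access('exclusive')]),
]
```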
What is the best way to track progress of a long-running function in PostgreSQL 11?
Since every function executes in a single transaction, even if the function writes to some "log" table, no other session/transaction can see that output until the function completes successfully and its transaction commits.
I read about some attempts here but they are from 2010.
https://www.endpointdev.com/blog/2010/04/viewing-postgres-function-progress-from/
Also, this approach looks terribly inconvenient.
As of today what is the best way to track progress?
One approach that I know of is to turn the function into a procedure and then do partial commits in the procedure. But what if I want to return some result set from the function? In that case I cannot turn it into a procedure, right? So how do I proceed in that case?
Many thanks in advance.
NOTE: The function is written in PL/pgSQL, the most common procedural SQL language available in PostgreSQL.
I don't know that there's a great way to do it built into Postgres yet, but there are a couple of ways to achieve logging that will be visible outside of a function.
You can use the pg_background extension to run an insert in the background that will be visible outside of the function. This requires compiling and installing this extension.
Use dblink to connect to the same database and insert data. This will most likely require setting up some permissions.
Neither option is ideal, but hopefully one can work for you. Converting your function to a procedure may also work, but you won't be able to call the procedure from within a transaction.
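As a sketch of the dblink route (the progress_log table and long_job() function are made-up examples, and your connection string and permissions may differ): dblink_exec opens a second connection, so its INSERT commits independently of the running function's transaction.

```sql
-- Assumes the dblink extension is available and local connections are
-- permitted; table and function names are hypothetical.
CREATE EXTENSION IF NOT EXISTS dblink;
CREATE TABLE IF NOT EXISTS progress_log (ts timestamptz DEFAULT now(), msg text);

CREATE OR REPLACE FUNCTION long_job() RETURNS void LANGUAGE plpgsql AS $$
DECLARE
  i int;
BEGIN
  FOR i IN 1..1000000 LOOP
    IF i % 100000 = 0 THEN
      -- the second connection commits this INSERT immediately, so other
      -- sessions can see the progress while long_job() is still running
      PERFORM dblink_exec(
        'dbname=' || current_database(),
        format('INSERT INTO progress_log(msg) VALUES (%L)',
               'processed ' || i || ' rows'));
    END IF;
  END LOOP;
END;
$$;
```

Another session can then read progress_log at any time while long_job() is still executing.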
In my extension I have a set of operations that are generated by user activities. Each operation consists of several steps.
To handle those operations I implemented a scheduler task (extension "scheduler" 6.2.0). Now the point is: the steps of each operation must be done one after the other, not in parallel. That means: at start, the scheduler task should find the next "free" operation, lock it and handle it.
For locking purposes, the database table with operations has an integer column "isLocked". So I wanted to use the following SQL statement to lock an operation:
$lockID = time();
'UPDATE operations SET isLocked = '.$lockID.' WHERE isLocked = 0 AND uid = '.$freeOperationFound->getUid().';'
After this SQL command I wanted to check whether the lock was set:
$repository->findOneByIsLocked($lockID);
If locking was successful, operation step handling can start.
If meanwhile another instance of scheduler task locks this operation, the SQL statement above does nothing because of condition: WHERE isLocked = 0.
The problem is that Extbase's repository API does not support raw SQL UPDATE statements.
If I just update the free operation object via the repository, the lock set by another task instance can be overwritten. I need some kind of "conditional" update.
I think I got it: $GLOBALS['TYPO3_DB']->exec_UPDATEquery is the answer.
The only question remaining is whether this method is also deprecated in FLOW, like $query->statement of Repository.
While the exec_UPDATEquery function from the DatabaseConnection class certainly gets the job done, here is the solution via Extbase. It might make more sense if you need to work with the Operation object after you lock it.
$persistenceManager = GeneralUtility::makeInstance('TYPO3\CMS\Extbase\Persistence\Generic\PersistenceManager');
$freeOperation = $repository->findOneByIsLocked(0);
$freeOperation->setIsLocked(time());
$repository->update($freeOperation);
$persistenceManager->persistAll();
$freeOperation->myOperation();
$freeOperation->myOtherOperation();
$freeOperation->setIsLocked(0);
$repository->update($freeOperation);
$persistenceManager->persistAll();
The reason why you need to persist manually is that your task does not run within the context of an ActionController action. And even if it did, changes would not be persisted automatically until the end of the action. Doing it through Extbase might be the safer option, because you can be sure you are actually working on the exact same operation that you have just locked.
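The value of the conditional update from the question can be sketched in plain Python (Operation and try_lock are illustrative, not TYPO3 API): because the lock is only taken when the row is still free, a second task can never silently overwrite the first task's lock.

```python
# In-memory sketch of: UPDATE operations SET isLocked = :lockID
#                      WHERE isLocked = 0 AND uid = :uid
class Operation:
    def __init__(self, uid):
        self.uid = uid
        self.is_locked = 0  # 0 = free, otherwise the winner's lock id

def try_lock(op, lock_id):
    """Compare-and-set: succeeds only if the row is still free,
    mirroring the WHERE isLocked = 0 condition of the UPDATE."""
    if op.is_locked == 0:
        op.is_locked = lock_id
        return True
    return False

op = Operation(uid=1)
assert try_lock(op, lock_id=1001) is True   # first task wins the lock
assert try_lock(op, lock_id=1002) is False  # second task sees it taken
assert op.is_locked == 1001                 # first lock was not overwritten
```

In the database the UPDATE performs this check-and-set atomically; an unconditional repository update has no such guard, which is exactly the race the asker wanted to avoid.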
I know about prepared transactions in Postgres, but it seems you can only commit or roll them back later. You cannot even view the transaction's database state before you've committed it. Is there any way to save a transaction for later use?
What I actually want to achieve is a preview (and correction) of some changes in the db (the changes are imports from a CSV file, so the user needs to see a preview before applying them). I want to make changes, add some more changes later, see the full state of the db, and then apply it (that is, commit the transaction).
I cannot find a very good reference in docs, but I have a very strong feeling that the answer is: No, you cannot do that.
It would mean that when you "save" the transaction, the database would basically have to maintain all of its locks in place for an indefinite amount of time. Even if it were possible, it would mean horrible failure modes and trouble on all fronts.
For the pattern that you are describing, I would use two separate transactions. Import into a staging table and show that to the user (or import into the main table but mark the rows as "unapproved"). If the user approves, move or update those rows in another transaction.
You can always end up in a situation where the user simply leaves or crashes without clicking "OK" or "Cancel". If what you're describing were possible, you would end up with a hung transaction holding all these resources. In my proposed solution you end up with leftover rows in the "staging" table that you can still show to the user later or remove.
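A sketch of the two-transaction approach (the items table and the CSV path are hypothetical):

```sql
-- Transaction 1: import the CSV into a staging table for preview.
BEGIN;
CREATE TABLE IF NOT EXISTS items_staging (LIKE items INCLUDING ALL);
-- server-side COPY needs file-read privileges; psql's \copy is the
-- client-side alternative
COPY items_staging FROM '/path/to/import.csv' WITH (FORMAT csv, HEADER);
COMMIT;

-- Any session can now inspect items_staging and show it to the user.

-- Transaction 2: only after the user approves, move the rows.
BEGIN;
INSERT INTO items SELECT * FROM items_staging;
DROP TABLE items_staging;  -- or DELETE FROM it, keeping the table around
COMMIT;
```

If the user never comes back, the only cleanup needed is dropping or truncating the staging table; no transaction is left hanging.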
You may want to read up on the saga pattern. This is actually a very simple example of a well-known and well-researched problem.
To make the long story short, this pattern breaks down a long-running process like yours into smaller operations that are applied and persisted in some way in separate transactions. If any of them happens to fail (or does not occur as expected), you have compensating actions that usually undo what the steps executed so far have done (e.g. by throwing away stale/irrelevant data).
Here are a couple of decent introductions:
https://blog.couchbase.com/saga-pattern-implement-business-transactions-using-microservices-part/
http://vasters.com/clemensv/2012/09/01/Sagas.aspx
This concept was formally introduced in the 80s, but is well alive and relevant today.
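The steps-plus-compensations idea can be sketched in a few lines of Python (all names are illustrative): each step pairs an action with a compensating action, and on failure the compensations for the steps completed so far run in reverse order.

```python
# Minimal saga runner: apply steps in order; on any failure, undo the
# completed steps by running their compensations in reverse.
def run_saga(steps):
    completed = []  # compensations for steps that succeeded
    try:
        for action, compensate in steps:
            action()
            completed.append(compensate)
    except Exception:
        for compensate in reversed(completed):
            compensate()
        return False
    return True

def fail(msg):
    raise RuntimeError(msg)

log = []
ok = run_saga([
    (lambda: log.append("import rows"), lambda: log.append("drop rows")),
    (lambda: fail("user rejected"),     lambda: log.append("never runs")),
])
assert ok is False
# only completed steps are compensated, in reverse order
assert log == ["import rows", "drop rows"]
```

In the import-preview scenario, "drop rows" corresponds to throwing away the staged data when the user rejects or abandons the import.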
I am looking at the Couchbase Dev Guide and trying to understand how two-phase commits work. I feel like the code example they give differs from the diagram.
Code Example
They link to a Ruby Gist that describes how you would do this for transferring points from one account to another.
My understanding is that they go down the following route:
1. Change the transaction state to pending
2. Update the first account by altering its points and adding a reference to the transaction
3. Update the second account by altering its points and adding a reference to the transaction
4. Change the transaction state to committed
5. Remove the reference to the transaction from the first account
6. Remove the reference to the transaction from the second account
7. Change the transaction state to done
In this example, if there was a failure between steps 2 and 3, we could roll back by reversing the change to points in any account which holds a reference to the transaction.
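The ordering from the code example can be sketched with plain dictionaries standing in for Couchbase documents (names and amounts are invented): points are changed and the transaction reference is added in the same step, which is what makes the reference usable as a rollback marker.

```python
# In-memory sketch of the seven steps; a crash between steps 2 and 3
# could be rolled back by reversing points in any account whose "txns"
# list still references the transaction.
txn = {"state": "initial", "amount": 25}
src = {"points": 100, "txns": []}
dst = {"points": 10, "txns": []}

def transfer(txn, src, dst, txn_id="t1"):
    txn["state"] = "pending"                                    # step 1
    src["points"] -= txn["amount"]; src["txns"].append(txn_id)  # step 2
    dst["points"] += txn["amount"]; dst["txns"].append(txn_id)  # step 3
    txn["state"] = "committed"                                  # step 4
    src["txns"].remove(txn_id)                                  # step 5
    dst["txns"].remove(txn_id)                                  # step 6
    txn["state"] = "done"                                       # step 7

transfer(txn, src, dst)
assert (src["points"], dst["points"]) == (75, 35)
assert txn["state"] == "done"
assert src["txns"] == [] and dst["txns"] == []
```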
Diagram Example
This is the diagram given to explain two-phase commit; I think it disagrees with the code example...
The diagram seems to say that you add a reference to the transaction to both accounts, and then you add/remove points in both accounts.
In this example, if there was a failure between steps 3 and 4, how would you know what to roll back? How would you know whether you had applied the change to points or not?
Is the diagram wrong?
Yeah, you are right. Step 3 of the diagram should also apply the balance changes at the same time as adding the transaction to the list. Basically, adding the transaction to the list is just a sign that some changes were applied to the balance.
By the way, in the original gist you can find a more complete, executable solution with rollback code: https://gist.github.com/avsej/3136027
I have a problem with existing database code (a trigger) that calls a trigger function that uses the NOTIFY command, which is not supported in the context of a prepared transaction.
My question is simple: from the trigger function, is there a way to detect that we are in the context of a prepared transaction?
Thanks in advance.
There is no way to detect that the current transaction will be committed using prepared transactions and two-phase commit, because you haven't PREPAREd the transaction yet; the transaction has no idea it's going to be subjected to two-phase commit until after your trigger runs. PostgreSQL doesn't require that you BEGIN TRANSACTION FOR TWO PHASE COMMIT (imaginary syntax) or anything like that.
You can test for max_prepared_transactions > 0 in pg_settings to see if prepared transactions are enabled, but there's no way to know if 2PC will be used until it happens.
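For example (a sketch; this only tells you whether prepared transactions are enabled at all, not whether the current transaction will actually be prepared):

```sql
-- Non-zero means the server accepts PREPARE TRANSACTION at all.
SELECT setting::int > 0 AS prepared_transactions_enabled
FROM pg_settings
WHERE name = 'max_prepared_transactions';
```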