In DB2 SQL, how can I kill a recursive function call that's already stuck in an infinite loop? - db2

If I have a DB2 SQL procedure or function that's recursive, or has a tricky loop, and the process has become infinite, how can I kill it if it's already running? This is DB2 for i v7.3, with development in DBeaver SQL.
Is it enough to simply cancel the query in the SQL IDE? I've done so in the past with too-long queries, and I always get a "Query has been cancelled" notification. But with an infinitely looping or recursing procedure, is there any risk the process might still continue in the background until something crashes and the DBA staff come to pound on my door with pitchforks and flaming torches?
EDIT: It was suggested my question is a duplicate of this one, but I read that one before posting, and it's not the same. That question is about how to prevent an infinite loop from happening in the first place; my question is about how to kill one that's already happening.

Cancel should work, but if you can find the right job in WRKACTJOB, you can just end it with ENDJOB OPTION(*IMMED).
The server jobs should be in subsystem QUSRWRK and are named QZDASOINIT. The one you are looking for will have your user ID, and if it is looping it will have a status of RUN. These jobs normally sit at TIMW when they are waiting for work. They are prestart jobs, so if you end one manually, another will be started if needed.
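As a sketch, you can also do both steps from SQL using the QSYS2 services on IBM i 7.3 (the job number below is hypothetical; substitute the qualified job name you find):
-- Find QZDASOINIT jobs in QUSRWRK that are actively running
SELECT job_name, authorization_name, job_status
  FROM TABLE(QSYS2.ACTIVE_JOB_INFO(
         SUBSYSTEM_LIST_FILTER => 'QUSRWRK',
         JOB_NAME_FILTER       => 'QZDASOINIT')) AS x
 WHERE job_status = 'RUN';
-- End the offending job immediately
CALL QSYS2.QCMDEXC('ENDJOB JOB(123456/QUSER/QZDASOINIT) OPTION(*IMMED)');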

Build a cancel into the loop by some means to exit a long-running loop - for example, by checking whether any rows exist in a control table:
LOOP
   -- a row in table MYEXIT signals the loop to bail out
   SET myexit = (SELECT myexit FROM myexit FETCH FIRST 1 ROW ONLY);
   IF myexit IS NOT NULL THEN
      RETURN;
   END IF;
   -- ... the real work of the loop goes here ...
END LOOP;
Maybe some better solutions will come of this.

Related

How to investigate time required to obtain lock - and why - within a procedure

I am stumped on an issue I am having. The true context is rather complicated, but I can boil it down to these functional points (everything else is not related to the problematic table):
I have a trigger function that contains several SELECTs and then an UPDATE
The update takes an unreasonable amount of time to execute ("unreasonable" meaning more than 1.4 s)
The same exact queries when run outside the trigger (for the same rows, parameters, etc.) do not have any issues (i.e. they execute in under 1-2ms)
I am pretty sure that indexes, etc., are working as necessary; i.e. there shouldn't be any issues.
There are no circular triggers
There is one trigger on the destination table, but even with that removed the behavior is the same.
I have done many tests to no avail, but these are pretty meaningful:
when the update is replaced with a SELECT, the response time is fast, as expected
when the update is replaced with a SELECT... FOR UPDATE, the response time is slow, the same as the update
This (as well as other things) has led me to believe that the delay is spent waiting to acquire a lock
No other transactions are really happening on that table. I am truly bewildered.
Server context: This is being run in AWS/RDS on db.m5.xlarge.
What I am looking for is whether there is a way to get some information about locks that are happening mid-transaction or possibly even a history of acquired locks? Or anything else that can give me insight into what is causing the delay that seems so closely related to acquiring a lock on that table.
Unfortunately, just to make everything even more frustrating, I cannot replicate the issue when I attempt to use EXPLAIN in the function body. The only way to do that (that I know of) is to use the EXECUTE... syntax with a query string. That doesn't have a delay - it's also useless for the trigger.
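For reference, one common diagnostic (a sketch, not specific to this setup, assuming Postgres 9.6+ for the wait_event columns) is to join pg_locks to pg_stat_activity from a second session while the slow UPDATE is in flight; setting log_lock_waits = on will additionally log any wait longer than deadlock_timeout:
-- Run in a second session while the UPDATE is hanging
SELECT a.pid,
       a.state,
       a.wait_event_type,
       a.wait_event,
       l.locktype,
       l.mode,
       l.granted,
       l.relation::regclass AS relation,
       a.query
  FROM pg_locks l
  JOIN pg_stat_activity a ON a.pid = l.pid
 WHERE NOT l.granted
    OR l.relation IS NOT NULL
 ORDER BY a.pid;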

Rollback in Postgres

As far as I know, we can't use START TRANSACTION within functions, so we can't use COMMIT and ROLLBACK in functions.
But how then do we ROLLBACK on some if-condition?
How then can we perform a sequence of statements at a specific isolation level? I mean a situation where an application wants to call an SQL (PL/pgSQL) function and that function really needs to run in a transaction with a certain isolation level. What to do in such a case?
In which cases is it then really practical to run ROLLBACK? Only when we manually write a script, check something, and then ROLLBACK manually if we don't like the result? In that same case I see the practicality of savepoints. Still, this feels like a serious constraint.
If you want to roll back the complete transaction, RAISE an exception.
If you only want to roll back part of your work, start a new block with a BEGIN at the point to which you want to roll back and add an EXCEPTION clause to the block.
Since the transaction is started outside the function, the isolation level already has to be set properly when you are in the function.
You can query
SELECT current_setting('transaction_isolation', TRUE);
and throw an error if the setting is not correct.
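Put together, a minimal sketch of both techniques (the function name and required isolation level are made up for illustration):
CREATE OR REPLACE FUNCTION demo_rollback() RETURNS void
   LANGUAGE plpgsql AS
$$
BEGIN
   -- insist that the caller started the transaction at the right isolation level
   IF current_setting('transaction_isolation', TRUE) <> 'serializable' THEN
      RAISE EXCEPTION 'call this function inside a SERIALIZABLE transaction';
   END IF;

   BEGIN   -- inner block: behaves like a savepoint
      -- ... statements that may need to be undone ...
      RAISE EXCEPTION 'trigger the partial rollback';
   EXCEPTION
      WHEN OTHERS THEN
         NULL;   -- inner block's work is rolled back, outer work is kept
   END;
END;
$$;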
Your question "In which cases is it really practical to run ROLLBACK?" is too general or too simple to answer.
You roll back a transaction if you have reached a point in your processing where you want to undo everything you have done so far in the transaction.
Often, that happens implicitly rather than explicitly by throwing an error.

PostgreSQL - how to determine whether a transaction is active?

Let me open by saying: yes, I am aware of Determine if a transaction is active (Postgres)
Unfortunately the sole answer to that question is far too specific to the use case provided, and doesn't actually indicate whether or not a transaction is active.
The select txid_current(); trick suggested by How to check for pending operations in a PostgreSQL transaction doesn't appear to work - I always get the same transaction ID from adjacent calls to that function. Possibly this is because I'm trying to test it from pgAdmin, which is transparently starting transactions? (Note: I don't actually care whether there are any pending changes or active locks, so looking at pg_locks isn't helpful - what if nothing's been touched since the transaction was started?)
So: How can I determine in PostgreSQL PL/pgSQL code if a transaction is currently active?
One possible use case is: the SP/FN in question will be doing its own explicit transaction management, and calling it with a transaction already active will greatly interfere with that. I want to raise an error so that the coding mistake of calling this SP/FN in a transaction can be corrected.
There are other use cases, though.
Ideally what I'm looking for is an equivalent to MSSQL's @@TRANCOUNT (though I don't really care how deeply the transactions may be nested...)
Postgres runs PL/pgSQL inside the transaction, so you can't control the transaction from inside PL/pgSQL. The code will look like:
begin;
select plpgsql_fn();
do '/* same as any plpgsql */';
end;
So answering your question:
If you have PL/pgSQL running at the moment, you have a transaction active at the moment...
Of course you can do some trick, like starting/ending work over dblink or such, but then you can check select txid_current(); over the dblink successfully...
If you want to determine if there have been any data modifications in your transaction, call txid_current_if_assigned(). It returns NULL if nothing has been modified yet.
If you only want to know if you are inside some transaction, you can save yourself the trouble, because you always are.
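A quick sketch of the difference (Postgres 10 or later; the temp table is there only to force a data modification):
BEGIN;
SELECT txid_current_if_assigned();   -- NULL: nothing modified yet
CREATE TEMP TABLE t (i int);         -- any write assigns a transaction ID
SELECT txid_current_if_assigned();   -- now returns the assigned ID
COMMIT;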
Before PostgreSQL v11, you cannot use transaction control statements in a function.
I haven't found a clean way to do that, but you can always issue BEGIN: if it succeeds, there was no transaction in progress (don't forget to roll back). If it fails with "there is already a transaction in progress", you are within a transaction (better not to roll back then).

Executing a trigger AFTER the completion of a transaction

In PostgreSQL, are DEFERRED triggers executed before (within) the completion of the transaction or just after it?
The documentation says:
DEFERRABLE
NOT DEFERRABLE
This controls whether the constraint can be deferred. A constraint
that is not deferrable will be checked immediately after every
command. Checking of constraints that are deferrable can be postponed
until the end of the transaction (using the SET CONSTRAINTS command).
It doesn't specify whether that is still inside the transaction or outside it. My personal experience says it is inside the transaction, and I need it to be outside!
Are DEFERRED (or INITIALLY DEFERRED) triggers executed inside of the transaction? And if they are, how can I postpone their execution to the time when the transaction is completed?
To give you a hint what I'm after, I'm using pg_notify and RabbitMQ (PostgreSQL LISTEN Exchange) to send out messages. I process such messages in an external application. Right now I have a trigger which notifies the external app of newly inserted records by including the record's id in the message. But in a non-deterministic way, once in a while, when I try to select a record by the id at hand, the record cannot be found. That's because the transaction is not complete yet and the record is not actually added to the table. If I can only postpone the execution of the trigger until after the completion of the transaction, everything will work out.
In order to get better answers, let me explain the situation even closer to the real world. The actual scenario is a little more complicated than what I explained before. The source code can be found here if anyone's interested. Because of reasons I'm not going to dig into, I have to send the notification from another database, so the notification is actually sent like:
PERFORM * FROM dblink('hq','SELECT pg_notify(''' || channel || ''', ''' || payload || ''')');
Which I'm sure makes the whole situation much more complicated.
Triggers (including all sorts of deferred triggers) fire inside the transaction.
But that is not the problem here, because notifications are delivered between transactions anyway.
The manual on NOTIFY:
NOTIFY interacts with SQL transactions in some important ways.
Firstly, if a NOTIFY is executed inside a transaction, the notify
events are not delivered until and unless the transaction is
committed. This is appropriate, since if the transaction is aborted,
all the commands within it have had no effect, including NOTIFY. But
it can be disconcerting if one is expecting the notification events to
be delivered immediately. Secondly, if a listening session receives a
notification signal while it is within a transaction, the notification
event will not be delivered to its connected client until just after
the transaction is completed (either committed or aborted). Again, the
reasoning is that if a notification were delivered within a
transaction that was later aborted, one would want the notification to
be undone somehow — but the server cannot "take back" a notification
once it has sent it to the client. So notification events are only
delivered between transactions. The upshot of this is that
applications using NOTIFY for real-time signaling should try to keep
their transactions short.
Bold emphasis mine.
pg_notify() is just a convenient wrapper function for the SQL NOTIFY command.
If some rows cannot be found after a notification has been received, there must be a different cause! Go find it. Likely candidates:
Concurrent transactions interfering
Triggers doing something more or different than you think they do.
All sorts of programming errors.
Either way, like the manual suggests, keep transactions that send notifications short.
dblink
Update: Transaction control in a PROCEDURE or DO statement in Postgres 11 or later makes this a lot simpler. Just COMMIT; to (also) send waiting notifications.
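A minimal sketch of that (Postgres 11+; procedure and channel names are made up):
CREATE PROCEDURE work_and_notify()
   LANGUAGE plpgsql AS
$$
BEGIN
   -- ... do the actual work ...
   PERFORM pg_notify('channel', 'payload');
   COMMIT;   -- commits the work and releases the notification to listeners
   -- execution continues here in a new transaction
END;
$$;
CALL work_and_notify();   -- must be called outside an explicit transaction block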
Original answer (mostly for Postgres 10 or older):
PERFORM * FROM dblink('hq','SELECT pg_notify(''' || channel || ''', ''' || payload || ''')');
... which should be rewritten with format() to simplify and make the syntax secure:
PERFORM dblink('hq', format('NOTIFY %I, %L', channel, payload));
dblink is a game-changer here, because it opens a separate transaction in the other database. This is sometimes used to fake autonomous transactions.
Does Postgres support nested or autonomous transactions?
How do I do large non-blocking updates in PostgreSQL?
dblink() waits for the remote command to finish. So the remote transaction will most probably commit first. The manual:
The function returns the row(s) produced by the query.
If you can send notification from the same transaction instead, that would be a clean solution.
Workaround for dblink
If notifications have to be sent from a different transaction, there is a workaround with dblink_send_query():
dblink_send_query sends a query to be executed asynchronously, that is, without immediately waiting for the result.
DO   -- or plpgsql function
$$
BEGIN
   -- do stuff
   PERFORM dblink_connect('hq', 'your_connstr_or_foreign_server_here');
   PERFORM dblink_send_query('hq', format('SELECT pg_sleep(3); NOTIFY %I, %L', 'channel', 'payload'));
   PERFORM dblink_disconnect('hq');
END
$$;
If you do this right before the end of the transaction, your local transaction gets a 3-second (pg_sleep(3)) head start to commit. Choose an appropriate number of seconds.
There is an inherent uncertainty to this approach, since you get no error message if anything goes wrong. For a secure solution you need a different design. After successfully sending the command, the chances for it to still fail are extremely slim, though. The chance that successful notifications are missed seems much higher, but that's built into your current solution already.
Safe alternative
A safer alternative is to write to a queue table and poll it, as discussed in @Bohemian's answer. This related answer demonstrates how to poll safely:
Postgres UPDATE … LIMIT 1
I'm posting this as an answer, assuming the actual problem you are trying to solve is deferring execution of an external process until after the transaction is completed (rather than the X-Y "problem" you're trying to solve using trigger Kung Fu).
Having the database tell an app to do something is a broken pattern. It's broken because:
There's no fallback if the app doesn't get the message, e.g. because it's down, the network explodes, whatever. Even having the app reply with an acknowledgment (which it can't) wouldn't fix this problem (see next point)
There's no sensible way to retry the work if the app gets the message but fails to complete it (for any of lots of reasons)
In contrast, using the database as a persistent queue, and having the app poll it for work and take the work off the queue when the work is complete, has none of the above problems.
There are lots of ways to achieve this. The one I prefer is to have some process (usually a trigger on insert, update and delete) put data into a "queue" table, and have another process poll that table for work to do, deleting from the table when the work is complete (see the sketch at the end of this answer).
It also adds some other benefits:
The production and consumption of work is decoupled, which means you can safely kill and restart your app (which must happen from time to time, eg deploying) - the queue table will happily grow while the app is down, and will drain when the app is back up. You can even replace the app with an entirely new one
If for whatever reason you want to initiate processing of certain items, you can just manually insert rows into the queue table. I used this technique myself to initiate the processing of all items in a database that needed initialising by being put on the queue once. Importantly, I didn't need to do a perfunctory update to every row just to fire the trigger
Getting to your question, a slight delay can be introduced by adding a timestamp column to the queue table and having the poll query only select rows that are older than (say) 1 second, which gives the database time to complete its transaction
You can't overload the app. The app will read only as much work as it can handle. If your queue is growing, you need a faster app, or more apps. If multiple consumers are operating, concurrency can be solved by (for example) adding a "token" column to the queue table
Queues backed by database tables are the basis of how persistent queues are implemented in commercial-grade queue-based platforms, so the pattern is well tested, used and understood.
Leave the database to do what it does best, and the only thing it does well: Manage data. Don't try to make your database server into an app server.
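For illustration, a sketch of the queue pattern described above (all names are made up; FOR UPDATE SKIP LOCKED, available in Postgres 9.5+, stands in for the "token" column mentioned earlier):
-- The queue, filled by a trigger inside the producing transaction
CREATE TABLE work_queue (
   id        bigserial PRIMARY KEY,
   item_id   bigint NOT NULL,
   queued_at timestamptz NOT NULL DEFAULT now()
);

CREATE FUNCTION enqueue_item() RETURNS trigger
   LANGUAGE plpgsql AS
$$
BEGIN
   INSERT INTO work_queue (item_id) VALUES (NEW.id);
   RETURN NEW;
END;
$$;

CREATE TRIGGER trg_enqueue
AFTER INSERT ON items   -- "items" is a hypothetical source table
FOR EACH ROW EXECUTE PROCEDURE enqueue_item();

-- The app polls: claim one row older than 1 second, removing it when done
DELETE FROM work_queue
 WHERE id = (SELECT id
               FROM work_queue
              WHERE queued_at < now() - interval '1 second'
              ORDER BY id
              LIMIT 1
              FOR UPDATE SKIP LOCKED)
RETURNING item_id;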

tsql query kill scenario

In our web app, we have lots of queries running. Most of them read data, but some update queries with high priority might come in. We'd like to cancel the read queries, but when using KILL, I'd like the read query to return a certain dataset or execution result upon receiving the cancel.
My intention is to mimic the behavior of signal in C programs for which a signal handler is invoked upon receiving a kill signal.
Is there any method to define an asynchronous KILL signal handler for SPs?
This is not a fully tested answer. But it is a bit more than just a comment.
One option is a dirty read (WITH (NOLOCK)).
This part is tested - I do this all the time.
To build a large scalable app, you may need to resort to this and manage it.
A dirty read will not block an update.
You can get exactly that - a dirty read.
A lot of people think a dirty read may get corrupt data.
If you are updating smith to johnson, the select is not going to get smison.
The select is going to get smith, and it will be immediately stale.
But how is that worse than taking a read lock?
The read gets smith and blocks the update.
Once the read locks are cleared, the row is updated.
I would contend that blocking an update also gives you stale data.
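A minimal sketch of the dirty-read approach (the table and column names are hypothetical):
-- READ UNCOMMITTED on this table only; the read does not block writers
SELECT LastName
FROM   dbo.Customers WITH (NOLOCK)
WHERE  CustomerID = 42;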
If you are using a reader, I think you could pass the same CancellationToken to each select and then just cancel that one token.
But it may not process the CancellationToken until it reads a row, so it may not cancel a long-running query that has not yet returned any rows.
DbDataReader.ReadAsync Method (CancellationToken)
Or if you are not using reader look at
SqlCommand.Cancel
As far as getting a cancel to return alternate data: I doubt SQL Server is going to do that.