How to cancel or terminate a long-running query in DB2 CLP without killing CLP?

Assume I have started execution of an inefficient, long-running query in an IBM DB2 database using the CLP. Maybe there are some terrible joins in there that take a lot of processing time in the database. Maybe I don't have the much-needed index yet.
# db2 +c -m
db2 => connect to mydb
db2 => select * from view_with_inefficient_long_running_query
//// CLP waiting for database response
How do I cancel the processing of this statement/query without killing the DB2 CLP? I can cancel it by killing the CLP, that is, by pressing Ctrl-C:
db2 => select * from view_with_inefficient_long_running_query
^C
SQL0952N Processing was cancelled due to an interrupt. SQLSTATE=57014
# db2 +c -m
db2 =>
But is there another, more elegant way? Perhaps another Ctrl- shortcut? I have already seen this question, but it doesn't address what I want.

AFAIK, there is no Ctrl- shortcut to interrupt the CLP. Your options are: open another terminal session and use LIST APPLICATIONS / FORCE APPLICATION; press Ctrl-Z to suspend the CLP and run LIST/FORCE from another CLP; or use a GUI tool like Data Server Manager to find the application and force it to terminate.
db2 list applications for database <database>
Get the application handle for the session(s) you want to terminate, then run:
db2 force application ( application-handle )
See the LIST APPLICATIONS and FORCE APPLICATION command documentation for details.
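For example, assuming the database is called mydb and the runaway session turns out to have application handle 1234 (both placeholders), the sequence from a second terminal would be:
db2 list applications for database mydb show detail
db2 "force application ( 1234 )"
FORCE APPLICATION is asynchronous, so the rollback of the interrupted statement may take a moment to finish.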

Alternatively, you could use the QueryTimeout configuration keyword, as described in https://www.ibm.com/support/knowledgecenter/SSEPGG_9.7.0/com.ibm.db2.luw.apdv.cli.doc/doc/r0008809.html
That way your statement would stop on its own and report SQLSTATE 57013:
-913 UNSUCCESSFUL EXECUTION CAUSED BY DEADLOCK OR TIMEOUT.
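Note that QueryTimeout is a CLI/ODBC keyword set in db2cli.ini, so it affects CLI-based applications rather than every client. A minimal sketch of such an entry, assuming a database alias of MYDB and a 60-second limit (both placeholders):
[MYDB]
QueryTimeout=60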

Rather than using the CLP interactively, why not run the query directly from the UNIX prompt?
db2 "select * from view_with_inefficient_long_running_query"
You can hit Ctrl-C to cancel your query. The connection to the database is maintained by the DB2 Backend Process (db2bp), and you get all of the benefits of working in a UNIX shell – superior history, command pipelines, etc.
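A minimal sketch of that workflow (mydb and the view name are taken from the question):
db2 connect to mydb
db2 "select * from view_with_inefficient_long_running_query"
^C
db2 "values current timestamp"
db2 terminate
Pressing Ctrl-C cancels only the statement; the connection held by db2bp survives, as the follow-up query shows, and db2 terminate ends it when you are done.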

Related

Start transaction automatically on psql login

I'm wondering if it's possible to have psql start a transaction automatically when I open a psql session on the command line. I know I can start a transaction manually with 'BEGIN;', but I'd like that to happen without my typing 'BEGIN;' each time on the command line.
Thanks!
I did a Google search, but it didn't come up with any good results.
You cannot have psql start a transaction when you login, but you can have it start a transaction with the first SQL statement you enter. For that, put a .psqlrc file into your home directory and give it the following content:
\set AUTOCOMMIT off
Note that this is a very bad idea (in my personal opinion). You run the risk of inadvertently starting a transaction that holds locks and blocks the progress of autovacuum. I have seen more than one PostgreSQL instance that suffered serious damage because administrators disabled autocommit in their interactive clients and kept transactions open. At the very least, add the following to your .psqlrc:
SET idle_in_transaction_session_timeout = '1min';
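Putting the two lines together, a complete ~/.psqlrc would then be (the one-minute timeout is only an example value):
\set AUTOCOMMIT off
SET idle_in_transaction_session_timeout = '1min';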

Datagrip: Create postgres index without waiting for execution

Is there a way to submit a command in datagrip to a database without keeping the connection open / asynchronously? I'm attempting to create indexes concurrently, but I'd also like to close my laptop.
My datagrip workflow:
Select column in a database, click 'modify column', and eventually run code such as:
create index concurrently batchdisbursements_updated_index
on de_testing.batchdisbursements (updated);
However, these run as background tasks and cancel if I exit datagrip.
What if you close your laptop without exiting datagrip? Datagrip is probably actively sending a cancellation message to PostgreSQL when you exit it. If you just close the laptop, I doubt it will do that. In that case, PostgreSQL won't notice the client has gone away until it tries to send a message, at which point the index creation should already be done and committed.
But this is a fragile plan. I would ssh to the server, run screen (or one of the fancier variants), run psql in that, and create the indexes from there.
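A minimal sketch of that approach (the server name, database name, and screen session name are placeholders; the index definition is taken from the question):
ssh dbserver
screen -S build_index
psql -d mydatabase
create index concurrently batchdisbursements_updated_index
on de_testing.batchdisbursements (updated);
Detach with Ctrl-A d and close the laptop; reattach later with screen -r build_index to check whether the index build finished.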

What does **$$** mean when configuring PowerShell Command Line for DB2?

I found this article that shows how you can set up PowerShell to act as your command line for processing DB2 commands.
In the article, it says that you can use the following command to configure PowerShell to run DB2 commands:
Set-Item -Path env:DB2CLP -Value "**$$**"
In the above command, what does the "**$$**" mean?
Thanks!
It has a function, as distinct from a meaning, and the **$$** value is meant for the Db2 CLP (db2.exe). Even if you are not using PowerShell (i.e. you are using db2cmd.exe or cmd.exe), this environment variable can be useful.
It tells the Db2 CLP to configure the current PowerShell session so that it can communicate with the background process db2bp.exe (the communication is IPC based). Such communication is necessary because it is that background process, db2bp.exe, which maintains your connection to the database when you run db2 connect to $your_database or an equivalent cmdlet. db2.exe manages db2bp.exe, so you don't have to worry about it.
The Db2 CLP knows which db2bp.exe it started for your PowerShell session and uses the DB2CLP environment variable as part of that.
Each individual db2 ... command line (or cmdlet) may complete quickly and acts on the currently connected database; you can run many db2 commands one after the other, or run scripts, but all the while it is the background task db2bp.exe that keeps your Db2 connection alive without needing to reconnect (as long as the Db2 server does not itself end or kill the connection).
The db2bp.exe process disappears when you run db2 terminate or end the process. You need to run db2 terminate when reconfiguring the node directory or database directory, when switching between different Db2 instances running on the same hostname, or optionally after db2 connect reset.
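Once the variable is set as shown in the question, a typical PowerShell session looks like this (the database name sample is a placeholder):
db2 connect to sample
db2 "select count(*) from syscat.tables"   # runs over the connection held by db2bp.exe
db2 terminate                              # ends db2bp.exe and drops the connection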

Postgres: how to start a procedure right after database start?

I have dozens of unlogged tables, and the documentation says that an unlogged table is automatically truncated after a crash or unclean shutdown.
Based on that, I need to check some tables after the database starts to see if they are "empty" and do something about it.
So, in short, I need to execute a procedure right after the database is started.
What is the best way to do that?
PS: I'm running Postgres 9.1 on an Ubuntu 12.04 server.
There is no such feature available (at the time of writing, the latest version was PostgreSQL 9.2). Your only options are:
- Start a script from the PostgreSQL init script that polls the database and, when the DB is ready, locks the tables and populates them;
- Modify the startup script to use pg_ctl start -w and invoke your script as soon as pg_ctl returns; this has the same race condition but avoids the need to poll;
- Teach your application to run a test whenever it opens a new pooled connection, so it can detect this condition, lock the tables, and populate them; or
- Don't use unlogged tables for this task if your application can't cope with them being empty when it opens a new connection.
There's been discussion of connect-time hooks on pgsql-hackers but no viable implementation has been posted and merged.
It's possible you could do something like this with PostgreSQL bgworkers, but it'd be a LOT harder than simply polling the DB from a script.
Postgres now has pg_isready for determining if the database is ready.
https://www.postgresql.org/docs/11/app-pg-isready.html
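A minimal shell sketch of the polling option, combining pg_isready with a check of one unlogged table (the database name mydb, the table my_unlogged_table, and repopulate.sql are placeholders):
#!/bin/sh
# wait until the server accepts connections
until pg_isready -q -d mydb; do
    sleep 1
done
# if the unlogged table came up empty (e.g. after a crash), repopulate it
if [ "$(psql -At -d mydb -c 'select count(*) from my_unlogged_table')" = "0" ]; then
    psql -d mydb -f repopulate.sql
fi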

Program cannot reconnect to Firebird after abnormal termination

What can be done to prevent having to restart the PC after a program (C++Builder) terminates abnormally without closing the database, using Firebird 2?
What I am looking for: I would like to be able to just restart the program without any other intervention. (I could have the user call a batch file executing some cleanup or add some lines of code to the program to disconnect everything.)
If your database is Firebird 2.1+, there are monitoring tables that show the active connections, and SYSDBA can manually delete any left-over connections.
If you look in the release notes, the syntax details should be there.
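A minimal sketch of that cleanup using isql, assuming a Firebird version whose monitoring tables allow deletes (the MYAPPUSER filter is a placeholder; adjust the predicate to match the left-over connections):
-- list the active attachments
SELECT MON$ATTACHMENT_ID, MON$USER, MON$REMOTE_PROCESS FROM MON$ATTACHMENTS;
-- deleting a row closes that connection
DELETE FROM MON$ATTACHMENTS
WHERE MON$ATTACHMENT_ID <> CURRENT_CONNECTION
AND MON$USER = 'MYAPPUSER';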