DataGrip: Create Postgres index without waiting for execution

Is there a way to submit a command in DataGrip to a database without keeping the connection open / asynchronously? I'm attempting to create indexes concurrently, but I'd also like to be able to close my laptop.
My DataGrip workflow:
Select a column in a database, click 'modify column', and eventually run code such as:
create index concurrently batchdisbursements_updated_index
on de_testing.batchdisbursements (updated);
However, these run as background tasks and are cancelled if I exit DataGrip.

What if you close your laptop without exiting DataGrip? DataGrip probably sends a cancellation request to PostgreSQL when you exit it; if you just close the laptop, I doubt it does that. In that case, PostgreSQL won't notice the client has gone away until it tries to send a message, at which point the index creation should already be done and committed.
But this is a fragile plan. I would ssh to the server, run screen (or one of the fancier variants), run psql in that, and create the indexes from there.
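If the session does get interrupted anyway, CREATE INDEX CONCURRENTLY does not roll the index back; it leaves an invalid index behind, so it's worth checking the catalog after reconnecting. A minimal check, using the index name from the question:

-- An interrupted concurrent build leaves the index marked as not valid.
SELECT c.relname AS index_name, i.indisvalid
FROM pg_index i
JOIN pg_class c ON c.oid = i.indexrelid
WHERE c.relname = 'batchdisbursements_updated_index';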

Related

Start transaction automatically on psql login

I'm wondering if it's possible to have psql start a transaction automatically when I open a psql session on the command line. I know I can start a transaction manually with 'BEGIN;', but I'd like that to happen automatically, without having to type 'BEGIN;' myself every time.
Thanks!
I did a Google search but that didn't come up with any good results.
You cannot have psql start a transaction when you log in, but you can have it start a transaction with the first SQL statement you enter. To do that, put a .psqlrc file into your home directory and give it the following content:
\set AUTOCOMMIT off
Note that this is a very bad idea (in my personal opinion). You run the risk of inadvertently starting a transaction that holds locks and blocks the progress of autovacuum. I have seen more than one PostgreSQL instance that suffered serious damage because of administrators who disabled autocommit in their interactive clients and kept transactions open. At the very least, add the following to your .psqlrc:
SET idle_in_transaction_session_timeout = '1min';
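Put together, the suggested ~/.psqlrc would contain both lines (note that idle_in_transaction_session_timeout is only available in PostgreSQL 9.6 and later):

\set AUTOCOMMIT off
SET idle_in_transaction_session_timeout = '1min';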

How to determine in pgAdmin if a database is completely restored?

When PgAdmin III displays a list of databases, a database in the middle of restoring looks just like any other one. How can I determine if the restore has completed or not?
If by restore you mean a pg_restore command in progress, you cannot see that directly from pgAdmin. What pg_restore actually does is execute ordinary CREATE TABLE, INSERT or COPY commands that differ in no way from normal commands. What you can do is open the Server Status window. If you know where the command is being executed from (its IP address), or if nothing else connects to the database, you can check whether there are open connections to it; if there are none, the restore has finished. If you can't deduce the answer from the connections, look at whether there are any transactions running (no transactions for some time usually means the restore has finished).
It would be simpler to get this information if you had access to the machine where the command is being executed.
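If you prefer a query over the Server Status window, something along these lines against pg_stat_activity shows the same information (column names are for PostgreSQL 9.2 and later; the database name is a placeholder):

-- Connections to the database being restored, and whether they are inside a transaction.
SELECT pid, client_addr, state, xact_start, query_start
FROM pg_stat_activity
WHERE datname = 'mydb';  -- replace with the database being restored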

Can I execute a stored procedure 'detached' (not keeping DB connection open) on PostgreSQL?

I want to execute a long-running stored procedure on PostgreSQL 9.3. Our database server is (for the sake of this question) guaranteed to be running stable, but the machine calling the stored procedure can be shut down at any second (Heroku dynos get cycled every 24h).
Is there a way to run the stored procedure 'detached' on PostgreSQL? I do not care about its output. Can I run it asynchronously and then let the database server keep working on it while I close my database connection?
We're using Python and the psycopg2 driver, but I don't care so much about the implementation itself. If the possibility exists, I can figure out how to call it.
I found notes on the asynchronous support and the aiopg library and I'm wondering if there's something in those I could possibly use.
No, you can't run a function that keeps on running after the connection you started it from terminates. When the PostgreSQL server notices that the connection has dropped, it will terminate the function and roll back the open transaction.
With PostgreSQL 9.3 or 9.4 it'd be possible to write a simple background worker to run procedures for you via a queue table, but this requires the ability to compile and install new C extensions into the server - something you can't do on Heroku.
Try to reorganize your function into smaller units of work that can be completed individually. Huge, long-running functions are problematic for other reasons, and should be avoided even if unstable connections aren't a problem.
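To illustrate the "smaller units of work" suggestion, here is a rough sketch (table and function names are invented): keep the pending work in a queue table and have the client call a function that processes one small batch per transaction, so a dropped connection only loses the batch that was in flight.

-- Hypothetical queue of pending work items.
CREATE TABLE work_queue (
    id      bigserial PRIMARY KEY,
    payload text NOT NULL,
    done    boolean NOT NULL DEFAULT false
);

-- Process one small batch; the caller commits after each call and simply
-- calls again (from any connection) until it returns 0.
CREATE OR REPLACE FUNCTION process_one_batch(batch_size integer DEFAULT 100)
RETURNS integer LANGUAGE plpgsql AS $$
DECLARE
    n integer;
BEGIN
    UPDATE work_queue
    SET done = true            -- the real per-row work would go here
    WHERE id IN (SELECT id FROM work_queue
                 WHERE NOT done
                 ORDER BY id
                 LIMIT batch_size);
    GET DIAGNOSTICS n = ROW_COUNT;
    RETURN n;
END;
$$;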

Postgres: how to start a procedure right after database start?

I have dozens of unlogged tables, and the docs say that an unlogged table is automatically truncated after a crash or unclean shutdown.
Based on that, I need to check some tables after the database starts to see if they are "empty" and do something about it.
So, in short, I need to execute a procedure right after the database is started.
What is the best way to do that?
PS: I'm running Postgres 9.1 on an Ubuntu 12.04 server.
There is no such feature available (at the time of writing, the latest version was PostgreSQL 9.2). Your only options are:
Start a script from the PostgreSQL init script that polls the database and, when the DB is ready, locks the tables and populates them (there is a race condition here: the application could connect and see the empty tables before your script gets to them);
Modify the startup script to use pg_ctl start -w and invoke your script as soon as pg_ctl returns; this has the same race condition but avoids the need to poll;
Teach your application to run a test whenever it opens a new pooled connection to detect this condition, lock the tables, and populate them (see the sketch after this list); or
Don't use unlogged tables for this task if your application can't cope with them being empty when it opens a new connection
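A minimal sketch of that third option, with made-up table names; the application would run this each time it opens a new pooled connection:

-- "session_cache" and "session_cache_source" are hypothetical names for illustration.
DO $$
BEGIN
    -- Serialize concurrent checks so two pooled connections don't repopulate at once.
    LOCK TABLE session_cache IN ACCESS EXCLUSIVE MODE;
    IF NOT EXISTS (SELECT 1 FROM session_cache) THEN
        -- The unlogged table was truncated by a crash or unclean shutdown; repopulate it.
        INSERT INTO session_cache (key, value)
        SELECT key, value FROM session_cache_source;
    END IF;
END;
$$;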
There's been discussion of connect-time hooks on pgsql-hackers but no viable implementation has been posted and merged.
It's possible you could do something like this with PostgreSQL bgworkers, but it'd be a LOT harder than simply polling the DB from a script.
Postgres now has pg_isready for determining if the database is ready.
https://www.postgresql.org/docs/11/app-pg-isready.html

Program cannot reconnect to Firebird after abnormal termination

What can be done to avoid having to restart the PC after a program (C++Builder) terminates abnormally without closing its Firebird 2 database connection?
What I am looking for: I would like to be able to just restart the program without any other intervention. (I could have the user call a batch file executing some cleanup or add some lines of code to the program to disconnect everything.)
If your database is Firebird 2.1+, there are monitoring tables that show the active connections, and SYSDBA can manually delete any left-over connections.
If you look in your release notes, the syntax details should be there.
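For reference, the monitoring-table approach on Firebird 2.1+ looks roughly like this (run it as SYSDBA or the database owner, and check the release notes for the exact details):

-- Disconnect every attachment except the one running this statement.
DELETE FROM MON$ATTACHMENTS
WHERE MON$ATTACHMENT_ID <> CURRENT_CONNECTION;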