How to cancel a PostgreSQL query?

I am not very familiar with databases or SQL and wanted to make sure that I don't mess anything up. I did:
SELECT pid, state, usename, query FROM pg_stat_activity;
to check whether I had any queries running, and there were several in the active state. Do I just cancel them by doing:
select pg_cancel_backend(PID);
And this won't affect anything except my queries, correct? I also wanted to figure out why those queries were still in the active state. I have a Python file where I read in my SQL file, but I stopped running the Python file in the middle of reading the SQL file. Is that possibly why this happened and why the states are still active?

Yes, this is what pg_cancel_backend(pid) is for. Why exactly the query is still running depends on a few things - it could be waiting to grab a lock, or the query could just take a long time - but given that the Python processes that started the queries have exited, the connection is technically already closed; the PG backend process just hasn't noticed yet. It won't notice until the query completes and it tries to return the query status to the client, at which point it will roll back the transaction when it sees the connection is no longer present.
The only effect calling pg_cancel_backend on the PIDs of those backends should have is to make PG notice immediately that the connection is closed, rather than whenever the query completes.
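If you want to cancel everything of yours in one go, a rough, untested sketch (assuming the pid/state/usename columns of pg_stat_activity in 9.2 or later) would be:
SELECT pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE state = 'active'
  AND usename = current_user      -- only sessions logged in as you
  AND pid <> pg_backend_pid();    -- don't cancel the session running this query
Note that pg_cancel_backend only cancels the current query; pg_terminate_backend would close the whole connection.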

Related

Postgres query idle after running for a few hours

I'm doing a huge upgrade on a database. I have quite a complex query that needs a few hours to run - I tested it on some sample data and the query is fine.
After analyzing my queries, I saw that my query changed from state 'ACTIVE' to 'IDLE' after running for 3:30h.
What does that mean exactly? The PostgreSQL manual indicates that this means the transaction is open (inside BEGIN) and idle.
Will my query end? Should I kill it and find a smarter way to upgrade?
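For reference, one way to see whether such a session is idle inside an open transaction (assuming the pg_stat_activity columns of PostgreSQL 9.2 or later) is roughly:
SELECT pid, state, xact_start, state_change, query
FROM pg_stat_activity
WHERE state LIKE 'idle%';
A non-null xact_start on an 'idle in transaction' row means the transaction is still open.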

SQL Query intermittently timing out, Any ALTER immediately fixes it

I have a pretty basic SQL query I'm running (SQL Server 2016). I'm pulling about 15 columns from some simple inner and left joins on 8 tables that all have WITH (NOLOCK) on the join. My where clause checks a few string, uniqueidentifier, and bit values. The stored procedure has no calculations, no loops/cursors/case statements, etc. It is very straightforward.
For some reason our application keeps intermittently freezing up because the SQL call is timing out. When I grab the call in Profiler and run it in SSMS it runs in sub-second time, but from the application it just won't finish.
However, if I script out an ALTER command on the query and just add a blank line and execute the change, the problem goes away and the calls run instantaneously. The change I make is nothing substantive, just the act of changing the query in some way seems to unlock it.
Does anyone have any idea what could be causing this? I don't see any locks when it is timing out. I was thinking it might be a bad execution plan being cached?
The only other oddity is that this is running from an old legacy application, our last one still in classic ASP. But it doesn't seem related to the web server or architecture, just the database.
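If the suspicion is a bad cached plan (classic parameter sniffing), a lower-impact test than ALTERing the procedure each time might be to force a recompile; the procedure name below is just a placeholder:
-- Invalidate the cached plan so it is rebuilt on the next call
EXEC sp_recompile N'dbo.MyLookupProc';
If that also makes the problem go away, adding OPTION (RECOMPILE) to the affected statement, or an OPTIMIZE FOR hint, are the usual longer-term fixes.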

Postgres returns errors on future transactions

I am currently migrating from MySQL to postgres using pgbouncer for my connection pool.
We select/insert/update/delete lots of data from postgres and all comes from remote sources so we try to make the data quality as good as possible before an insert but sometimes some bad data slips through.
This causes Postgres to report: current transaction is aborted, commands ignored until end of transaction block
This is fine, except that the connection through pgbouncer will then report this error for every query. I get the same behaviour if I connect directly to Postgres instead of pgbouncer. I'd expect it to roll back whichever transaction caused the issue.
Is there a way to just roll back and continue working as normal? Everything I've read just says to fix the query, but in this case that's not always possible.
You need to use the ROLLBACK command. This will undo everything since the last BEGIN TRANSACTION or START TRANSACTION. Note that transactions do not nest; if you've begun multiple transactions without committing, this will roll back the outermost transaction.
This will drop you into autocommit mode. You may want to issue a new BEGIN TRANSACTION command to open a new transaction.
You should also be able to ROLLBACK TO SAVEPOINT, if you have a savepoint from before the error.
(If at all possible, it is preferred to just fix the query, but depending on what you're doing, that may be prohibitively difficult.)
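A minimal sketch of the savepoint pattern (the table name here is made up):
BEGIN;
INSERT INTO staging (id) VALUES (1);
SAVEPOINT before_risky_row;
-- suppose the next insert fails because of bad data;
-- only the work since the savepoint is aborted
ROLLBACK TO SAVEPOINT before_risky_row;
INSERT INTO staging (id) VALUES (2);   -- the transaction carries on
COMMIT;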

Dropping index concurrently PostgreSQL 9.1

DROP INDEX CONCURRENTLY first appeared in PostgreSQL 9.2, but my server runs 9.1. Unfortunately a plain DROP INDEX locks my app for an unpredictable amount of time, which is a very sad fact when doing it in production.
Is there a way to drop an index concurrently?
No, there's no simple workaround - otherwise it's rather less likely that DROP INDEX CONCURRENTLY would've been added in 9.2.
However, you can kill all sessions to force the drop to occur promptly.
What you want to avoid is the drop waiting on a partially acquired exclusive lock that prevents other transactions from proceeding, but doesn't let it proceed either, while it waits for other transactions to finish and release their share locks. The best way to ensure that happens is to kill all concurrent sessions.
So, in one session:
DROP INDEX my_index;
In another session, as a superuser, terminate all other sessions using the following untested query, which you'll need to adapt appropriately and test before use:
SELECT pg_terminate_backend(procpid)
FROM pg_stat_activity
WHERE procpid <> (
    SELECT procpid
    FROM pg_stat_activity
    WHERE current_query = 'DROP INDEX my_index;')
AND procpid <> pg_backend_pid();
Your well-written and well-tested application will immediately reconnect and retry its queries without bothering the user or getting upset, because it knows that transient errors are something it has to cope with, so it runs all its database access in retry loops. If it isn't well written, you'll discover that with a flood of user-visible error messages. If it's really not well written you'll have to restart it before it gets its head together, but it's rare to see apps that are quite that broken.
This is a heavy-handed approach. You can be rather softer about it by joining against pg_locks and only terminating sessions that actually hold a lock on the relation you're interested in or the index you wish to modify. You get to enjoy writing that query, because my interest in working around limitations of older database versions is limited.
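For what it's worth, an untested starting point for that softer variant against the 9.1 catalogs (the table name is a placeholder; add the index too if you want it covered):
SELECT pg_terminate_backend(l.pid)
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE l.relation = 'my_table'::regclass
AND l.pid <> pg_backend_pid();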

SQL queries running slowly or stuck after DBCC DBReindex or Alter Index

All,
SQL 2005 SP3, database is about 70 GB in size. Once in a while when I reindex all of my indexes in all of my tables, the front end seems to freeze up or run very slowly. These are queries coming from the front end, not stored procedures in SQL Server. The front end is using a JTDS JDBC connection to access SQL Server. If we stop and restart the web services sending the queries, the problem seems to go away. It is my understanding that we have a connection pool in which we re-use connections and don't establish a new connection each time.
This problem does not happen every time we reindex. I have tried it both ways, with DBCC DBREINDEX and with ALTER INDEX using ONLINE = ON and SORT_IN_TEMPDB = ON.
Any insight into why this problem occurs once in a while and how to prevent this problem would be very helpful.
Thanks in advance,
Gary Abbott
When this happens next time, look into sys.dm_exec_requests to see what is blocking the requests from the clients. The blocking_session_id will indicate who is blocking, and the wait_type and wait_resource will indicate what it is blocked on. You can also use the Activity Monitor to the same effect.
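For example, something along these lines (untested) lists the blocked requests together with the statements they are running:
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_resource,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;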
On a pre-grown database an online index rebuild will not block normal activity (select/insert/update/delete). The load on the server may increase as a result of the online index rebuild and this could result in overall slower responses, but it should not cause blocking.
If the database is not pre-grown, though, the extra allocations of the index rebuild will trigger database growth events, which can be very slow if growth is left at the default 10% increment and instant file initialisation is not enabled. During a database growth event all activity in that database is frozen, and this may be your problem even if the indexes are rebuilt online. Again, Activity Monitor and sys.dm_exec_requests would both clearly show this happening.
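A quick way to sanity-check the growth settings (run in the affected database) is something like:
SELECT name, size, growth, is_percent_growth
FROM sys.database_files;
growth is reported in 8 KB pages unless is_percent_growth is 1, in which case it is a percentage.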