Is my PostgreSQL query still running even if server closed the connection? - postgresql

Postgres noob here. I have a long PostgreSQL query running an UPDATE on ~3 million rows. I did this via psql, and after about the second hour I got this message:
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
Is my query still running? I did run:
select *
from pg_stat_activity
where datname = 'mydb';
and I do still see a row with my update query, with state = active, wait_event_type = IO, and wait_event = DataFileRead. Should I be worried that my connection closed? Is my query still running, and is the best way to check for completion to keep polling with
select *
from pg_stat_activity
where datname = 'mydb';
?

Your query will not succeed. Your client lost its connection, and while the backend server process that was handling your UPDATE is still running, it will notice the disconnect when it tries to return the query status upon completion and will abort the transaction (whether or not you issued a BEGIN; every statement in Postgres runs inside a transaction even without an explicit BEGIN/COMMIT). You will need to re-issue the UPDATE.
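If you'd rather not let the doomed backend keep burning I/O until it finishes and aborts, you can find it and cancel it yourself; a sketch, where 'mydb' comes from your query and 12345 stands in for whatever pid the first query returns:

```sql
-- Find the backend still working on the orphaned UPDATE
SELECT pid, state, wait_event_type, wait_event, query
FROM pg_stat_activity
WHERE datname = 'mydb'
  AND state = 'active';

-- Optionally cancel it instead of waiting for it to finish and abort
SELECT pg_cancel_backend(12345);  -- hypothetical pid from the query above
```

Either way the transaction rolls back; cancelling just gets you there sooner.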

Related

Go sql package, PostgreSQL and PgBouncer with behavior on retry

Let's imagine that we have PostgreSQL and PgBouncer (in transaction pooling mode).
We plan to execute the following transaction:
BEGIN;
UPDATE a ...;
UPDATE b ...;
SELECT c ...;
UPDATE d ...;
COMMIT;
When the transaction begins, PgBouncer gives us a connection.
Then we execute:
UPDATE a; -- successful
UPDATE b; -- successful
SELECT c; -- successful
UPDATE d; -- failed, because PgBouncer restarted.
Then we try to retry using the Go DB client:
UPDATE d;
On the third attempt we acquire a connection and execute the query. Will this query be executed in the same transaction, or will it run on a new connection and lead to an inconsistent state?
Or does every statement carry some identifier saying which transaction it belongs to?
I can't be 100% certain, since I am not familiar with the internals of PgBouncer or Postgres, but it stands to reason that a transaction is bound to a connection, since transactions have no client-visible identifier. So as long as the TCP/SQL connection is not restarted, you should be able to resume. But if any of the applications restart, then the transaction is gone. The safe retry unit is therefore the whole transaction, from BEGIN to COMMIT, not the single failed statement.
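You can see for yourself that a transaction is tied to a backend session rather than to anything a client could re-attach to; a minimal sketch against a plain Postgres connection:

```sql
BEGIN;
-- Both values are scoped to this session: the backend PID and the
-- transaction ID. Neither can be "resumed" from a different connection;
-- if this connection drops, the transaction rolls back.
SELECT pg_backend_pid(), txid_current();
COMMIT;
```

A transaction ID does exist server-side, but there is no protocol-level way for a new connection to adopt it.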

How can I get the process id for the same query I have executed on postgres

I have executed a query against my remote PostgreSQL database from a simple JDBC Java program, and I want its process ID. Can anyone suggest how I can get the process ID for the query I executed?
The process ID is assigned when you open the connection to the database server.
It's per connection, not an ID "per query"!
So before you run your actual query, you can run:
select pg_backend_pid();
to get the PID assigned to your JDBC connection. Then you can e.g. log it or print it somehow so that you know it once your query is running.
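As a sketch: run the first statement over the JDBC connection, then from any other session (psql, say) you can watch that backend; 12345 stands in for whatever PID the first query returned:

```sql
-- In the JDBC connection, before the real query:
SELECT pg_backend_pid();

-- In any other session, using the PID returned above (12345 is hypothetical):
SELECT pid, state, query
FROM pg_stat_activity
WHERE pid = 12345;
```

The pg_stat_activity row will show whatever statement that connection is currently running.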

PostgreSQL - how to unlock table record AFTER shutdown

Sorry, I looked for this everywhere but cannot find a working solution. :/
I badly need this for abnormal-shutdown testing.
What I'm trying to do here is:
Insert row in TABLE A
Lock this record
(At separate terminal) service postgresql-9.6 stop
Wait a few moments
(At separate terminal) service postgresql-9.6 start
"Try" to unlock the record by executing "COMMIT;" in the same terminal as step 2.
How I did step 2:
BEGIN;
SELECT * FROM TABLE A WHERE X=Y FOR UPDATE;
The problem is that once I did step 6, this error shows up:
DB=# commit;
FATAL: terminating connection due to administrator command
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
So when I execute "COMMIT;" again, it only shows:
DB=# commit;
WARNING: there is no transaction in progress
COMMIT
Now the record cannot be unlocked.
I've tried getting the PID of the locking session and then executing pg_terminate_backend (or pg_cancel_backend), but it just doesn't work.
DB=# select pg_class.relname,pg_locks.* from pg_class,pg_locks where pg_class.relfilenode=pg_locks.relation;
DB=# select pg_terminate_backend(2450);
FATAL: terminating connection due to administrator command
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
DB=# select pg_cancel_backend(3417);
ERROR: canceling statement due to user request
Please help. Does anyone have any ideas? :/
..Or is this even possible?
My specs:
Postgresql-9.6
RedHat Linux
There's a fundamental misunderstanding or three here. Lock state is not persistent.
When you lock a record (or table), the lock is associated with the transaction that took the lock. The transaction is part of the running PostgreSQL session, your connection to the server.
Locks are released at the end of transactions.
Transactions end:
On explicit COMMIT or ROLLBACK;
When a session disconnects without an explicit COMMIT of the open transaction, triggering an implied ROLLBACK;
When the server shuts down, terminating all active sessions, again triggering an implied ROLLBACK of all in-progress transactions.
Thus, you have released the lock you took at step 2 when you shut the server down at step 3. The transaction that acquired that lock no longer exists because its session was terminated by server shutdown.
If you examine pg_locks, you'll see that the lock entry is present before the restart and vanishes after it.
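A minimal way to watch that, assuming the table from step 2 is named a (and joining pg_locks to pg_class on oid, which is the reliable join key, rather than relfilenode):

```sql
-- Session 1: take the row lock
BEGIN;
SELECT * FROM a WHERE x = y FOR UPDATE;

-- Session 2: the lock entry is visible now, and gone after a server restart
SELECT c.relname, l.locktype, l.pid, l.granted
FROM pg_locks l
JOIN pg_class c ON c.oid = l.relation
WHERE c.relname = 'a';
```

After stopping and starting the server, rerunning the second query returns no row for that lock, because the session and transaction that held it no longer exist.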

Postgres and PyCharm and hung transactions

I keep running into this issue where my postgresql database hangs because I didn't finish a transaction while debugging in PyCharm.
The log has several of these messages:
[16:30:40 PDT] unexpected EOF on client connection with an open transaction
Now the database is hung and I don't know how to recover from it other than shutting down the database (vagrant halt; vagrant up)
Is there any way to clear those stuck transactions so I don't have to go through stopping and restarting the database?
Thanks for any info
I found this solution here:
SELECT * FROM pg_stat_activity ORDER BY client_addr ASC, query_start ASC;
will list all your hung/idle transactions, then you can run
SELECT pg_terminate_backend(3592)
using the pid listed in the table.
and it is much faster than restarting Vagrant or PostgreSQL.
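A slightly more targeted variant, if you only want to see and kill sessions sitting idle inside an open transaction (the 10-minute cutoff is an arbitrary choice):

```sql
-- Show only sessions stuck inside an open transaction
SELECT pid, state, query_start, query
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY query_start;

-- Terminate the ones that have been stuck for a while
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle in transaction'
  AND query_start < now() - interval '10 minutes';
```

Terminating the backend rolls back its open transaction and releases any locks it held, which is what un-hangs the database.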

Connection lost after query runs for few minutes in PostgreSQL

I am using PostgreSQL 8.4 and PostGIS 1.5. What I'm trying to do is INSERT data from one table into another (but not strictly the same data). For each column a few queries are run, and there are a total of 50143 rows stored in the table. But the query is quite resource-heavy: after it has run for a few minutes, the connection is lost. It's happening about 21-22k ms into the execution of the query, after which I have to start the DBMS manually again. How should I go about solving this issue?
The error message is as follows:
[Err] server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
Additionally, here is the psql error log:
2013-07-03 05:33:06 AZOST HINT: In a moment you should be able to reconnect to the database and repeat your command.
2013-07-03 05:33:06 AZOST WARNING: terminating connection because of crash of another server process
2013-07-03 05:33:06 AZOST DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
My guess, reading your problem, is that you are hitting out-of-memory issues. Craig's suggestion to turn off overcommit is a good one. You may also need to reduce work_mem if this is a big query. This may slow down your query, but it will free up memory. work_mem is per operation, so a single query can use many times that setting.
Another possibility is you are hitting some sort of bug in a C-language module in PostgreSQL. If this is the case, try updating to the latest version of PostGIS etc.
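As a sketch of the work_mem suggestion: you can lower it just for the session running the big INSERT, without touching postgresql.conf (the value here is only illustrative):

```sql
-- Applies only to the current session; other connections keep the server default
SET work_mem = '16MB';
-- ...then run the big INSERT ... SELECT in this same session
```

Because the setting is per sort/hash operation, a complex query with several such nodes can use a multiple of this value, which is why lowering it helps under memory pressure.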