What happens when you get disconnected from a database during a transaction? - postgresql

I was connected via ssh to a remote Postgres DB.
I was deleting all the rows of a table using SQLAlchemy
db_session.query(TableName).delete()
but before the deletion was completed I got disconnected from the internet.
Now delete seems broken.
All the rows are still there, and I can query them, but when I run the above command again it just hangs and waits indefinitely.
Maybe the db is still waiting for the previous transaction to terminate?
What can I do?
I hope someone can help me. Thank you.
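If it is indeed a leftover transaction from the dropped connection, one way to check is to look in pg_stat_activity for a session that is "idle in transaction" and is still holding the row locks your new delete() is waiting on. A minimal sketch, assuming psycopg2 and placeholder connection details (adjust to your setup):

    import psycopg2

    # Placeholder connection string; adjust to your setup.
    conn = psycopg2.connect("dbname=mydb user=myuser host=remote-host")
    conn.autocommit = True
    cur = conn.cursor()

    # A client that vanished mid-transaction usually leaves a backend behind in the
    # "idle in transaction" state, still holding the locks taken by the DELETE.
    cur.execute("""
        SELECT pid, state, state_change, query
        FROM pg_stat_activity
        WHERE state = 'idle in transaction'
    """)
    for pid, state, state_change, query in cur.fetchall():
        print(pid, state, state_change, query)

    # Terminating that backend rolls back its transaction and releases the locks,
    # after which the DELETE can be re-run. Requires superuser or the same role.
    # (stuck_pid is a placeholder for the pid printed above.)
    # cur.execute("SELECT pg_terminate_backend(%s)", (stuck_pid,))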

Related

Why are there always at least 10 sessions for a PostgreSQL database? Why can't they be terminated?

Original aim: rename a database using ALTER DATABASE via psql.
Problem: the rename fails because other sessions are accessing the target database.
・All terminals/applications I am aware of have been closed.
・Querying pg_stat_activity shows that there are 10 processes (= sessions?) accessing the db.
・The username for each session is the same user I have been using for psql and for some local Phoenix and Django apps. The client_addr is also localhost for all of them.
・When I use pg_terminate_backend on any of the pids, another process gets immediately spawned.
・After restarting my PC, 10 processes are spawned again.
Concern: Since I can't account for these 10 processes and can't get rid of them, I think I'm misunderstanding how Postgres works somewhere.
Question: Why are 10 sessions/processes connected to this particular database, and why can't I terminate them using pg_terminate_backend?
Note: In the Phoenix project I set up recently, I set the pool_size of the Repo config to 10, which makes me think it's related... but I'm pretty sure that project isn't running in any way.
Update - Solved
As a_horse_with_no_name suggested, by doing the following I was able to put a stop to the 10 mystery sessions.
(1) Prevent login of the user responsible for the sessions (identifiable by querying `pg_stat_activity`) by doing `alter user ... with nologin`.
(2) Run pg_terminate_backend on each of the sessions' pids.
After those steps I was able to rename the database.
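For reference, those two steps roughly correspond to the following sketch, using psycopg2 with made-up role and database names (superuser or owner privileges assumed):

    import psycopg2

    # Placeholder connection; sufficient privileges for ALTER ROLE are assumed.
    conn = psycopg2.connect("dbname=postgres user=postgres")
    conn.autocommit = True
    cur = conn.cursor()

    # (1) Stop the pooled application role from opening new sessions.
    cur.execute("ALTER ROLE app_user NOLOGIN")

    # (2) Terminate every existing session of that role on the target database.
    #     With NOLOGIN set, the connection pool can no longer respawn them.
    cur.execute("""
        SELECT pg_terminate_backend(pid)
        FROM pg_stat_activity
        WHERE usename = 'app_user' AND datname = 'target_db'
    """)

    # Once the rename is done, allow logins again:
    # cur.execute("ALTER ROLE app_user LOGIN")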
The remaining puzzle is how those sessions got into that state in the first place... From the contents of pg_stat_activity, the wait_event value for each was ClientRead.
From this post, it seems that the application may have been forcibly stopped halfway through a transaction or something, leaving postgres hanging.

If I stop a long-running PostgreSQL DELETE USING query, does it roll back?

I have a PostgreSQL DELETE USING query that has been running for 319 hours. I think at this point the query should be terminated. What do you think the risks of such an operation are, and would it roll back?
Thanks.
If it hasn't been committed, then there are no issues: the uncommitted DELETE simply rolls back when the query is cancelled. It may be a hung transaction that has to be killed from the DBA side.
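A sketch of how that might look from another session, assuming psycopg2 and placeholder connection details (the pid comes from pg_stat_activity):

    import psycopg2

    # Placeholder connection string.
    conn = psycopg2.connect("dbname=mydb user=postgres")
    conn.autocommit = True
    cur = conn.cursor()

    # Find the long-running DELETE and how long its transaction has been open.
    cur.execute("""
        SELECT pid, now() - xact_start AS runtime, query
        FROM pg_stat_activity
        WHERE state = 'active' AND query ILIKE 'delete %'
    """)
    for pid, runtime, query in cur.fetchall():
        print(pid, runtime, query)

    # pg_cancel_backend() cancels only the current query; pg_terminate_backend()
    # kills the whole session. Either way, the uncommitted DELETE rolls back,
    # although the rollback itself can take a noticeable amount of time.
    # (pid_of_the_delete is a placeholder for the pid printed above.)
    # cur.execute("SELECT pg_cancel_backend(%s)", (pid_of_the_delete,))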

Stopping a Postgres infinite recursive query

I wrote a wrong infinite-loop recursive query while learning recursive Postgres queries, and it affected the whole Postgres server.
Every database request now takes infinitely long and gives me timeouts, so I tried /etc/init.d/postgresql restart. While that seems to have helped a little, queries still run slowly.
Are the unfinished queries cached even after the Postgres server restart? How can I escape this sticky situation? :(
Thank you ammoQ and Tometzky for your answers. It turned out the Postgres process was cleaning up a lot of stuff; it got back to normal after a couple of minutes.
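For future reference, a sketch of two guards that can keep a runaway recursive query from taking the server down with it: a depth limit in the recursive term, and a statement_timeout as a safety net. psycopg2, the connection string, and the toy CTE are all assumptions, not the original query:

    import psycopg2

    # Placeholder connection string.
    conn = psycopg2.connect("dbname=mydb user=myuser")
    cur = conn.cursor()

    # Safety net: cancel anything in this session that runs longer than 30 seconds.
    cur.execute("SET statement_timeout = '30s'")

    # Depth-limited recursive CTE: the WHERE clause in the recursive term guarantees
    # termination even if the recursion logic itself is wrong.
    cur.execute("""
        WITH RECURSIVE t(n) AS (
            SELECT 1
            UNION ALL
            SELECT n + 1 FROM t WHERE n < 100
        )
        SELECT count(*) FROM t
    """)
    print(cur.fetchone())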

MongoDB interrupted while reindexing

I have a collection with about 3,000,000 entries that I need to reindex. This whole thing began when I tried to add a 2d index. To do this, I created an ssh tunnel, opened the mongo shell and tried to use ensureIndex. I'm in a place with a somewhat unreliable internet connection, and an hour in it ended up breaking the pipe. I then tunneled back in, opened the mongo shell and tried to look at the number of indexes using getIndexes; the new index I created showed up, but I wasn't confident it had finished, so I decided to use reIndex. In retrospect, this was stupid. The pipe broke again. Now when I open the shell and try to issue getIndexes, the shell doesn't respond.
So what should I do? Do I need to repair my database? Can I issue reIndex when I have a more reliable internet connection? Is there a way to issue reIndex without keeping the shell open, but without doing it in the background and having it take eons? (I'll check the mongod shell options to see if I can find anything, then check the node.js mongo api so I can try running something as a service on server)
And also, if I end up running reIndex as a service on the server, is there any way to check if it's working? The most frustrating part of this right now is I have no idea if my database is ok, if reIndex is still running, etc. Any help would be much appreciated. Thanks.
You don't have a problem. MongoDB keeps running the command on the server and only stops it if you explicitly kill the operation (db.killOp()).
You do not need to wait for the index operation to finish!
Regarding the connection problems, try using the screen command.
It lets you create a "persistent" session - persistent not in the sense of disk persistence, but in the sense of surviving a connection loss.
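As for checking whether the index build is still running, a sketch with pymongo (host/port are placeholders) that lists the operations currently in progress; a running (re)index shows up there, typically with a progress message:

    from pymongo import MongoClient

    # Placeholder connection string.
    client = MongoClient("mongodb://localhost:27017")

    # currentOp reports operations in progress; a running index build appears here,
    # usually with a "msg" field describing its progress.
    ops = client.admin.command("currentOp")
    for op in ops.get("inprog", []):
        print(op.get("opid"), op.get("op"), op.get("msg", ""))

    # The shell's db.killOp(opid) corresponds to the killOp command:
    # (some_opid is a placeholder for an opid printed above.)
    # client.admin.command("killOp", op=some_opid)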

Doing pg_dump while there are still many active transactions

As per the subject: what happens to the backup file when there are still many active transactions in the database? Does it export a real-time snapshot, or just a partial backup?
Thanks in advance.
pg_dump runs in a serializable transaction, so it sees a consistent snapshot of the database, including system catalogs.
However, it is possible to get a 'cache lookup failed' error if someone performs DDL changes while a dump is starting. The time window for this sort of thing isn't very large, but it can happen. See: http://archives.postgresql.org/pgsql-bugs/2010-02/msg00187.php
pg_dump will give you a consistent state. Any transaction not committed before pg_dump started will not be reflected in the dump.
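For completeness, a sketch of the invocation (database name and output file are placeholders; connection settings come from the usual defaults or PG* environment variables). The snapshot is taken when pg_dump starts, so anything committed afterwards is not in the dump:

    import subprocess

    # Placeholder database name and output file.
    subprocess.run(
        ["pg_dump", "--format=custom", "--file=mydb.dump", "mydb"],
        check=True,
    )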