Stopping a Postgres infinite recursive query

While learning recursive Postgres queries, I wrote a buggy infinite-loop recursive query, and it has affected the whole Postgres server.
Every database request now takes an extremely long time and gives me timeouts, so I tried /etc/init.d/postgresql restart. While that seems to have helped a little, queries still run slowly.
Are the unfinished queries cached even after the Postgres server restart? How can I escape this sticky situation? :(

Thank you ammoQ and Tometzky for your answers. It turned out the postgres process was cleaning up a lot of stuff; it got back to normal after a couple of minutes.
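
For anyone who hits the same thing: a runaway backend can usually be cancelled without restarting the whole server. A minimal sketch against the standard pg_stat_activity view (on very old Postgres versions the columns are named procpid and current_query instead of pid and query):

-- List backends with their current query and runtime.
SELECT pid, now() - query_start AS runtime, query
FROM pg_stat_activity
ORDER BY runtime DESC;

-- Politely ask the offending query to stop...
SELECT pg_cancel_backend(12345);     -- 12345 is a placeholder pid
-- ...or, if it ignores that, kill the whole backend.
SELECT pg_terminate_backend(12345);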

Related

What happens when you get disconnected from a database during a transaction?

I was connected via ssh to a remote Postgres DB.
I was deleting all the rows of a table using SQLAlchemy
db_session.query(TableName).delete()
but before the deletion completed, I got disconnected from the internet.
Now the delete seems broken.
All the rows are still there, and I can query them, but when I run the above command again it waits endlessly.
Maybe the db is still waiting for the previous transaction to terminate?
What can I do?
I hope someone can help me. Thank you.
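
That is most likely exactly what is happening: the orphaned session can still hold its locks until the server notices the disconnect. A hedged sketch of how you could confirm and clear it from another connection (the pid is a placeholder you take from the first query):

-- Look for a session stuck in an open transaction.
SELECT pid, state, query
FROM pg_stat_activity
WHERE state = 'idle in transaction';

-- Terminating the orphaned backend rolls its transaction back
-- and releases the lock that blocks your new DELETE.
SELECT pg_terminate_backend(12345);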

GCP SQL inserts are extremely slow

I have a basic Postgres instance in GCP with the following configuration:
https://i.stack.imgur.com/5FFZ6.png
I have a simple Python script using psycopg2 that does inserts in a loop to the database, connected through the SQL Auth Proxy.
All the inserts are done in the same transaction.
The problem is that it is taking a couple of hours to insert around 200,000 records.
When I run the script on a local database it takes a couple of seconds.
What could be causing this huge difference?
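
A likely culprit (an assumption, going only by the description) is network round-trip latency: an insert loop pays a full client-server round trip per statement, which is negligible against a local database but dominates when every statement crosses the network through the SQL Auth Proxy. Batching many rows into each statement amortizes that cost; a hedged sketch with hypothetical table and column names:

-- One round trip per row: slow over a high-latency link.
INSERT INTO records (id, payload) VALUES (1, 'a');
INSERT INTO records (id, payload) VALUES (2, 'b');

-- Many rows per statement: one round trip for the whole batch.
INSERT INTO records (id, payload) VALUES
    (1, 'a'),
    (2, 'b'),
    (3, 'c');

From psycopg2, the same idea is what psycopg2.extras.execute_values provides.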

Cannot stop Postgres despite immediate stop

I have this issue that is driving me nuts. Despite all my efforts, I am not able to force my postgres server to shut down. I have followed these instructions: http://www.question-defense.com/2008/10/17/pg_ctl-server-does-not-shut-down-force-postgres-to-shutdown
but still, nothing happens, and all I get in the shell is
waiting for server to shut down............................................................... failed
pg_ctl: server does not shut down
Any help much appreciated.
Update: Checking the logs, I see this recurring error:
LOG: checkpoints are occurring too frequently (25 seconds apart)
HINT: Consider increasing the configuration parameter "checkpoint_segments".
After giving it a lot of thought, especially about how I installed it in the first place, I realized that I had set up the install so that a launch daemon would start postgres when my machine boots. Thus, any manual kill simply resulted in the daemon recreating those processes.
To resolve this problem you need to stop the daemon using launchctl and remove a .plist file in your postgres directory.
Good luck if you face the same problem.
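For anyone searching later, the shape of the fix was roughly the following (the exact .plist name and location depend on how Postgres was installed, so this path is only a placeholder):

launchctl unload -w ~/Library/LaunchAgents/org.postgresql.postgres.plist

With the daemon unloaded, pg_ctl stop (or a plain kill of the postmaster) is no longer undone by an automatic relaunch.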
You are probably running with the default setting of checkpoint_segments = 3, which produces those warnings. Your database does many writes, right? It takes some time to write all of this to disk, and your database is quite busy rotating logfiles instead of doing real work.
If you increase checkpoint_segments, you will see better performance and less I/O.
For further reading: https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
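
As a rough sketch of the change (checkpoint_segments only exists on the older releases this question concerns, and 16 is just an illustrative value, not a recommendation):

-- Check the current value from any session:
SHOW checkpoint_segments;

-- Raise it in postgresql.conf, e.g.
--   checkpoint_segments = 16
-- then reload the configuration without a restart:
SELECT pg_reload_conf();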

Postgresql slow query reproduction

I've got a query that runs slowly in PostgreSQL the first time it's run; subsequent runs are very fast. Restarting PostgreSQL and Apache doesn't reproduce the slow runtime, but restarting the entire environment/system does. Is there some other way to force whatever caching is happening to expire, without rebooting?
The slow first run is most likely the OS page cache being cold. On Linux, try restarting Postgres and then dropping the page cache (as root):
sync
echo 3 > /proc/sys/vm/drop_caches

Mongodb interrupted while reindexing

I have a collection with about 3,000,000 entries that I need to reindex. This whole thing began when I tried to add a 2d index. To do this, I created an ssh tunnel, opened the mongo shell and tried to use ensureIndex. I'm in a place with a somewhat unreliable internet connection, and an hour in it ended up breaking the pipe. I then tunneled back in, opened the mongo shell and tried to look at the number of indexes using getIndexes; the new index I created showed up, but I wasn't confident it had finished, so I decided to use reIndex. In retrospect, this was stupid. The pipe broke again. Now when I open the shell and try to issue getIndexes, the shell doesn't respond.
So what should I do? Do I need to repair my database? Can I issue reIndex when I have a more reliable internet connection? Is there a way to issue reIndex without keeping the shell open, but without doing it in the background and having it take eons? (I'll check the mongod shell options to see if I can find anything, then check the node.js mongo API so I can try running something as a service on the server.)
And also, if I end up running reIndex as a service on the server, is there any way to check if it's working? The most frustrating part of this right now is I have no idea if my database is ok, if reIndex is still running, etc. Any help would be much appreciated. Thanks.
You don't have a problem. MongoDB keeps running a command on the server even after your shell disconnects; it only stops if you explicitly kill the operation (db.killOp()).
You do not need to keep a shell open and wait for the index operation to finish!
Regarding the connection problems, try using the screen command.
It gives you a "persistent" terminal session - persistent not in the sense of disk storage, but in that it survives a connection loss.
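
To see whether the index build is in fact still running, a hedged sketch from the mongo shell (the exact fields vary across MongoDB versions, so treat this as illustrative):

db.currentOp().inprog.forEach(function (op) {
    // Index builds usually carry a msg/progress field showing how far along they are.
    if (op.msg) printjson({ opid: op.opid, msg: op.msg, progress: op.progress });
});
// Only if you genuinely need to abort it: db.killOp(<opid from above>)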