I am looking for a way to cancel queries that are currently running, from the CLI.
I have found these links:
https://www.postgresql.org/docs/11/libpq-cancel.html
https://www.postgresql.org/docs/11/contrib-dblink-cancel-query.html
but it seems that they are not what I am looking for.
Given the pid of the session running the query (the process ID of the corresponding backend process, which you can find in pg_stat_activity, or from ps, top, etc.), you can use:
psql -c "SELECT pg_cancel_backend(<your_pid>)"
If you're trying to kill all queries meeting some criteria (e.g. those which have been running/blocking/idle for some period of time, or those running against a particular database), something like this is often useful:
psql -c "SELECT pg_cancel_backend(pid) FROM pg_stat_activity WHERE <your_conditions>"
You can also disconnect them using pg_terminate_backend(pid).
To cancel the most recently started query:
SELECT pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE pid <> pg_backend_pid()
ORDER BY query_start DESC
LIMIT 1;
pg_backend_pid() is the connection you're using to run the command; without this filter, the "latest query" would be the one you're currently executing.
I run:
select *
from pg_stat_activity
And there are some old queries (still running in the background on the DB) whose Python applications no longer exist (the apps crashed or were stopped without closing the connection):
state   | wait_event | backend_type
active  | null       | parallel_backend
Is there a way to close all the queries whose client processes no longer exist?
I saw this post:
Kill a postgresql session/connection
but I don't want to kill all sessions or connections, because there are some connections which gather and update important data.
I just want to close sessions (and stop queries) whose client processes no longer exist.
If you know, or are able to figure out, which sessions are not necessary, you can fetch their pids from here:
SELECT pid,* FROM pg_stat_activity;
Then use those pids, along with the dbname, to kill those connections:
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
where datname in (<put dbname over here>)
and pid in(<put pid over here>);
Or simply use the query below to kill idle connections, i.e. those that are not active:
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
where datname in (<put dbname over here>)
and state = 'idle';
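As a more concrete sketch (the database name and the ten-minute threshold are assumptions to adjust), this terminates sessions that have been sitting idle for a while, which is often the symptom of a client process that died without closing its connection:
-- terminate sessions in 'mydb' that have been idle for more than 10 minutes
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'mydb'
  AND state = 'idle'
  AND state_change < now() - interval '10 minutes'
  AND pid <> pg_backend_pid();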
I have executed a query against my remote PostgreSQL database via a simple JDBC Java program, and I want its process ID. Can anyone suggest how I can get the process ID for the query I have executed?
The process ID is assigned when you open the connection to the database server.
It's per connection, not an ID "per query"!
So before you run your actual query, you can run:
select pg_backend_pid();
to get the PID assigned to your JDBC connection. Then you can e.g. log it or print it somehow so that you know it once your query is running.
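If you then want to see what that connection is doing from another session, a quick check could look like this (the PID value is just an example):
SELECT pid, state, query
FROM pg_stat_activity
WHERE pid = 12345;  -- the value returned by pg_backend_pid() on the JDBC connection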
I have a Postgres database that has two Windows processes connected to it. One is a PowerShell script, the other is a C# application. Both processes run on the same box, accessing the same database.
I have occasional locking problems, checking in pg_locks will give me the blocked and blocking pids.
If I look on windows task manager I can see that both of these pids have the process name 'postgres.exe'
How can I tell which pid refers to the PowerShell process, and which to the C# app?
The app (or script) can tell you its pid using the function pg_backend_pid()
select pg_backend_pid();
Alternatively, you can set different application_name parameters in your apps and report this using pg_stat_activity, e.g.:
set application_name to 'my_distinct_name';
select l.*, a.application_name
from pg_locks l
join pg_stat_activity a using (pid);
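Once each application sets its own application_name, mapping pids to applications is a simple lookup (an illustrative query, assuming both apps connect to the current database):
SELECT pid, application_name, state, query
FROM pg_stat_activity
WHERE datname = current_database();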
Just a quick question.
I have one heavy function in PostgreSQL 9.3. How can I check whether the function is still running after several hours, and how can I run a function in the background in psql? (My connection is unstable from time to time.)
Thanks
For long running functions, it can be useful to have them RAISE LOG or RAISE NOTICE from time to time, indicating progress. If they're looping over millions of records, you might emit a log message every few thousand records.
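A minimal sketch of that pattern in PL/pgSQL (the table name big_table and the 10000-row interval are made-up placeholders):
DO $$
DECLARE
    i bigint := 0;
    r record;
BEGIN
    FOR r IN SELECT id FROM big_table LOOP
        -- ... the real per-row work goes here ...
        i := i + 1;
        IF i % 10000 = 0 THEN
            RAISE NOTICE 'processed % rows', i;
        END IF;
    END LOOP;
END $$;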
Some people also (ab)use a SEQUENCE, where they get the nextval of the sequence in their function, and then directly read the sequence value to check progress. This is crude but effective. I prefer logging whenever possible.
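A sketch of that sequence trick (the sequence name progress_seq is an assumption):
CREATE SEQUENCE progress_seq;
-- inside the long-running function, once per processed row or batch:
--     PERFORM nextval('progress_seq');
-- from another session, to check how far it has gotten:
SELECT last_value FROM progress_seq;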
To deal with disconnects, run psql on the remote side over ssh rather than connecting to the server directly over the PostgreSQL protocol. As Christian suggests, use screen so the remote psql doesn't get killed when the ssh session dies.
Alternately, you can use the traditional unix command nohup, which is available everywhere:
nohup psql -f the_script.sql </dev/null &
which will run psql in the background, writing all output and errors to a file named nohup.out.
You may also find that if you enable TCP keepalives, you don't lose remote connections anyway.
pg_stat_activity is a good hint to check if your function is still running. Also use screen or tmux on the server to ensure that it will survive a reconnect.
1. Log in to the psql console:
$ psql -U user -d database
2. Issue a \x command to format the results.
3. Run SELECT * FROM pg_stat_activity;
4. Scroll until you see your function name in the list. It should have an active status.
5. Check whether there are any locks blocking the tables your function relies on, using:
SELECT k_locks.pid AS pid_blocking,
       k_activity.usename AS user_blocking,
       k_activity.query AS query_blocking,
       locks.pid AS pid_blocked,
       activity.usename AS user_blocked,
       activity.query AS query_blocked,
       to_char(age(now(), activity.query_start), 'HH24h:MIm:SSs') AS blocking_age
FROM pg_catalog.pg_locks locks
JOIN pg_catalog.pg_stat_activity activity ON locks.pid = activity.pid
JOIN pg_catalog.pg_locks k_locks
  ON locks.locktype = k_locks.locktype
  and locks.database is not distinct from k_locks.database
  and locks.relation is not distinct from k_locks.relation
  and locks.page is not distinct from k_locks.page
  and locks.tuple is not distinct from k_locks.tuple
  and locks.virtualxid is not distinct from k_locks.virtualxid
  and locks.transactionid is not distinct from k_locks.transactionid
  and locks.classid is not distinct from k_locks.classid
  and locks.objid is not distinct from k_locks.objid
  and locks.objsubid is not distinct from k_locks.objsubid
  and locks.pid <> k_locks.pid
JOIN pg_catalog.pg_stat_activity k_activity ON k_locks.pid = k_activity.pid
WHERE k_locks.granted and not locks.granted
ORDER BY activity.query_start;
Is there a way to force clients to disconnect from PostgreSQL? I'm looking for the equivalent of DB2's force application all.
I'd like to do this on my development box because when I've got database consoles open, I can't load a database dump. I have to quit them first.
Kills idle processes in PostgreSQL 8.4:
SELECT procpid, (SELECT pg_terminate_backend(procpid)) as killed from pg_stat_activity
WHERE current_query LIKE '<IDLE>';
It combines the pg_terminate_backend function with the pg_stat_activity system view. (In 9.2 and later, the procpid and current_query columns were renamed to pid and query, and idle connections are reported via the state column rather than as '<IDLE>' in the query text.)
This SO answer from araqnid explains it beautifully (full quote follows, then me again):
To mark database 'applogs' as not accepting new connections:
update pg_database set datallowconn = false where datname = 'applogs';
Another possibility would be to revoke 'connect' access on the database for the client role(s).
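For example (assuming the same 'applogs' database and a hypothetical role app_user):
REVOKE CONNECT ON DATABASE applogs FROM app_user;
-- or, more broadly:
REVOKE CONNECT ON DATABASE applogs FROM PUBLIC;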
Disconnect users from database = kill backend. So to disconnect all other users from "applogs" database, for example:
select pg_terminate_backend(procpid)
from pg_stat_activity
where datname = 'applogs' and procpid <> pg_backend_pid();
Once you've done both of those, you are the only user connected to 'applogs'. Although there might be a delay before the backends actually finish disconnecting?
Update from MarkJL: There is indeed a delay before the backends finish disconnecting.
Now me again: That being said, mind that the procpid column was renamed to pid in PostgreSQL 9.2 and later.
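For 9.2 and later the same query therefore becomes:
select pg_terminate_backend(pid)
from pg_stat_activity
where datname = 'applogs' and pid <> pg_backend_pid();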
I think that this is much more helpful than the answer by Milen A. Radev which, while technically the same, does not come with usage examples and real-life suggestions.
I'm posting my answer because I couldn't use any of the others in my script (server 9.3):
psql -U postgres -c "SELECT pid, (SELECT pg_terminate_backend(pid)) as killed from pg_stat_activity WHERE datname = 'my_database_to_alter';"
In the next line, you can do anything you want with 'my_database_to_alter'. As you can see, you perform the query from the "postgres" database, which exists in almost every PostgreSQL installation.
Doing it as a superuser, and from outside the problem database itself, worked perfectly for me.
Probably a more heavy-handed approach than should be used, but:
for x in `ps -eF | grep -E "postgres.*idle"| awk '{print $2}'`;do kill $x; done
I found this thread on the mailing list. It suggests using SIGTERM to cause the clients to disconnect.
Not as clean as db2 force application all.