How to relate a backend pid to a Windows application? - postgresql

I have a Postgres database with two Windows processes connected to it. One is a PowerShell script, the other is a C# application. Both processes run on the same box and access the same database.
I have occasional locking problems; checking pg_locks gives me the blocked and blocking pids.
If I look in Windows Task Manager, I can see that both of these pids have the process name 'postgres.exe'.
How can I tell which pid refers to the PowerShell process, and which to the C# app?

The app (or script) can tell you its own backend pid using the function pg_backend_pid():
select pg_backend_pid();
Alternatively, you can set a different application_name in each app and then read it back from pg_stat_activity, e.g.:
set application_name to 'my_distinct_name';
select l.*, a.application_name
from pg_locks l
join pg_stat_activity a using (pid);
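If you are on PostgreSQL 9.6 or later, a sketch like the following (the application_name values are whatever you chose above) lists each blocked session together with the pids blocking it, so you can see at a glance whether the PowerShell script or the C# app is the one waiting:
select a.pid,
       a.application_name,
       a.query,
       pg_blocking_pids(a.pid) as blocked_by
from pg_stat_activity a
where cardinality(pg_blocking_pids(a.pid)) > 0;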

Related

How to kill queries whose client processes no longer exist

I run:
select *
from pg_stat_activity
And there are some old queries (still running in the background on the DB) whose Python application no longer exists (the apps crashed or stopped without closing their connections):
state  | wait_event | backend_type
active | null       | parallel_backend
Is there a way to close all the queries whose client processes no longer exist?
I saw this post:
Kill a postgresql session/connection
but I don't want to kill all sessions or connections, because there are some connections which gather and update important data.
I just want to close the sessions (and stop the queries) whose client processes no longer exist.
If you know, or are able to figure out, which sessions are not necessary, then you can fetch their pids from here:
SELECT pid,* FROM pg_stat_activity;
And use those pids, along with the dbname, to kill those connections:
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
where datname in (<put dbname over here>)
and pid in (<put pid over here>);
Or simply use the query below to kill idle connections, i.e. those that are not active.
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
where datname in (<put dbname over here>)
and state = 'idle';
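If you cannot tell which sessions are orphaned by pid alone, a rough sketch (the database placeholder and the 30-minute threshold are values to adjust) is to terminate sessions that have been idle, or idle in a transaction, for a long time:
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = '<put dbname over here>'
  AND state IN ('idle', 'idle in transaction')
  AND state_change < now() - interval '30 minutes'
  AND pid <> pg_backend_pid();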

Canceling running queries from the CLI

I am looking for a way to cancel currently running queries from the CLI.
I have found these links:
https://www.postgresql.org/docs/11/libpq-cancel.html
https://www.postgresql.org/docs/11/contrib-dblink-cancel-query.html
but it seems they are not what I am looking for.
Given the pid of the session running the query (the process ID of the corresponding backend process, which you can find in pg_stat_activity, or from ps, top, etc.), you can use:
psql -c "SELECT pg_cancel_backend(<your_pid>)"
If you're trying to kill all queries meeting some criteria (e.g. those which have been running/blocking/idle for some period of time, or those running against a particular database), something like this is often useful:
psql -c "SELECT pg_cancel_backend(pid) FROM pg_stat_activity WHERE <your_conditions>"
You can also disconnect them using pg_terminate_backend(pid).
To cancel the most recently started query:
SELECT pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE pid <> pg_backend_pid()
ORDER BY query_start DESC
LIMIT 1;
pg_backend_pid() is the connection you're using to run the command; without this filter, the "latest query" would be the one you're currently executing.

PostgreSQL: How to check if my function is still running

Just a quick question.
I have a heavy function in PostgreSQL 9.3. How can I check whether the function is still running after several hours, and how can I run a function in the background in psql? (My connection is unstable from time to time.)
Thanks
For long running functions, it can be useful to have them RAISE LOG or RAISE NOTICE from time to time, indicating progress. If they're looping over millions of records, you might emit a log message every few thousand records.
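A minimal sketch of that idea (the function, table, and column names here are made up for illustration):
CREATE OR REPLACE FUNCTION process_rows() RETURNS void AS $$
DECLARE
    rec record;
    n   bigint := 0;
BEGIN
    FOR rec IN SELECT * FROM big_table LOOP
        -- ... the heavy per-row work goes here ...
        n := n + 1;
        IF n % 10000 = 0 THEN
            RAISE NOTICE 'processed % rows so far', n;  -- visible to the client and in the server log
        END IF;
    END LOOP;
END;
$$ LANGUAGE plpgsql;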
Some people also (ab)use a SEQUENCE, where they get the nextval of the sequence in their function, and then directly read the sequence value to check progress. This is crude but effective. I prefer logging whenever possible.
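As a sketch of the sequence trick (the sequence name is arbitrary): create a sequence up front, call nextval() inside the function's loop, and read the sequence's last_value from another session to see how far it has got:
CREATE SEQUENCE progress_seq;
-- inside the long-running function's loop:
--     PERFORM nextval('progress_seq');
-- from another session, check progress with:
SELECT last_value FROM progress_seq;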
To deal with disconnects, run psql on the remote side over ssh rather than connecting to the server directly over the PostgreSQL protocol. As Christian suggests, use screen so the remote psql doesn't get killed when the ssh session dies.
Alternatively, you can use the traditional Unix command nohup, which is available everywhere:
nohup psql -f the_script.sql </dev/null &
which will run psql in the background, writing all output and errors to a file named nohup.out.
You may also find that if you enable TCP keepalives, you don't lose remote connections anyway.
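If you connect directly over the PostgreSQL protocol and want to try the keepalive route, a sketch of the relevant libpq connection parameters passed to psql (the host and database names are placeholders, and the timing values are just a starting point):
psql -f the_script.sql "host=db.example.com dbname=mydb keepalives=1 keepalives_idle=60 keepalives_interval=10 keepalives_count=5"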
pg_stat_activity is a good hint to check if your function is still running. Also use screen or tmux on the server to ensure that it will survive a reconnect.
1 - Log in to the psql console:
$ psql -U user -d database
2 - Issue a \x command to format the results.
3 - SELECT * from pg_stat_activity;
4 - Scroll until you see your function name in the list. It should have an active status.
5 - Check whether there are any locks blocking the tables your function relies on, using:
SELECT k_locks.pid AS pid_blocking, k_activity.usename AS user_blocking, k_activity.query AS query_blocking,
       locks.pid AS pid_blocked, activity.usename AS user_blocked, activity.query AS query_blocked,
       to_char(age(now(), activity.query_start), 'HH24h:MIm:SSs') AS blocking_age
FROM pg_catalog.pg_locks locks
JOIN pg_catalog.pg_stat_activity activity ON locks.pid = activity.pid
JOIN pg_catalog.pg_locks k_locks ON locks.locktype = k_locks.locktype
  and locks.database is not distinct from k_locks.database
  and locks.relation is not distinct from k_locks.relation
  and locks.page is not distinct from k_locks.page
  and locks.tuple is not distinct from k_locks.tuple
  and locks.virtualxid is not distinct from k_locks.virtualxid
  and locks.transactionid is not distinct from k_locks.transactionid
  and locks.classid is not distinct from k_locks.classid
  and locks.objid is not distinct from k_locks.objid
  and locks.objsubid is not distinct from k_locks.objsubid
  and locks.pid <> k_locks.pid
JOIN pg_catalog.pg_stat_activity k_activity ON k_locks.pid = k_activity.pid
WHERE k_locks.granted and not locks.granted
ORDER BY activity.query_start;

Heroku Postgres: Too many connections. How do I kill these connections?

I have an app running on Heroku. This app has a Postgres 9.2.4 (Dev) addon installed. To access my online database I use Navicat Postgres. Sometimes Navicat doesn't cleanly close the connections it sets up with the Postgres database. The result is that after a while there are 20+ open connections to the Postgres database. My Postgres plan only allows 20 simultaneous connections, so with the 20+ open connections my Postgres database is now unreachable (too many connections).
I know this is a Navicat problem and I'm trying to solve it on that end. But when it does happen (too many connections), how can I fix it (e.g. close all the connections)?
I've tried all of the following things, without result.
Closed Navicat & restarted my computer (OS X 10.9)
Restarted my Heroku application (heroku restart)
Tried to restart the online database, but I found out there is no option to do this
Manually closed all connections from OS X to the IP of the Postgres server
Restarted our router
I think it's obvious there are some 'dead' connections at the Postgres side. But how do I close them?
Maybe have a look at what heroku pg:kill can do for you? https://devcenter.heroku.com/articles/heroku-postgresql#pg-ps-pg-kill-pg-killall
heroku pg:killall will kill all open connections, but that may be a blunt instrument for your needs.
Interestingly, you can actually kill specific connections using heroku's dataclips.
To get a detailed list of connections, you can query via dataclips:
SELECT * FROM pg_stat_activity;
In some cases, you may want to kill all connections associated with an IP address (your laptop or in my case, a server that was now destroyed).
You can see how many connections belong to each client IP using:
SELECT client_addr, count(*)
FROM pg_stat_activity
WHERE client_addr is not null
AND client_addr <> (select client_addr from pg_stat_activity where pid=pg_backend_pid())
GROUP BY client_addr;
which will list the number of connections per IP excluding the IP that dataclips itself uses.
To actually kill the connections, you pass their "pid" to pg_terminate_backend(). In the simple case:
SELECT pg_terminate_backend(1234)
where 1234 is the offending PID you found in pg_stat_activity.
In my case, I wanted to kill all connections associated with a (now dead) server, so I used:
SELECT pg_terminate_backend(pid), host(client_addr)
FROM pg_stat_activity
WHERE host(client_addr) = 'IP HERE'
1) First, log in to Heroku with the correct id (in case you have multiple accounts) using heroku login.
2) Then, run heroku apps to get a list of your apps and copy the name of the one that has the PostgreSQL db installed.
3) Finally, run heroku pg:killall --app appname to terminate all the connections.
From the Heroku documentation (emphasis is mine):
FATAL: too many connections for role
FATAL: too many connections for role "[role name]"
This occurs on Starter Tier (dev and basic) plans, which have a max connection limit of 20 per user. To resolve this error, close some connections to your database by stopping background workers, reducing the number of dynos, or restarting your application in case it has created connection leaks over time. A discussion on handling connections in a Rails application can be found here.
Because Heroku does not provide superuser access your options are rather limited to the above.
Restart server
heroku restart --app <app_name>
It will close all connections and restart.
As the superuser (eg. "postgres"), you can kill every session but your current one with a query like this:
select pg_cancel_backend(pid)
from pg_stat_activity
where pid <> pg_backend_pid();
If they do not go away, you might have to use a stronger "kill", but certainly test with pg_cancel_backend() first.
select pg_terminate_backend(pid)
from pg_stat_activity
where pid <> pg_backend_pid();

Force client disconnect using PostgreSQL

Is there a way to force clients to disconnect from PostgreSQL? I'm looking for the equivalent of DB2's force application all.
I'd like to do this on my development box because when I've got database consoles open, I can't load a database dump. I have to quit them first.
Kills idle processes in PostgreSQL 8.4:
SELECT procpid, (SELECT pg_terminate_backend(procpid)) as killed from pg_stat_activity
WHERE current_query LIKE '<IDLE>';
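On 9.2 and later, where procpid became pid and current_query was replaced by query plus a separate state column, an equivalent sketch is:
SELECT pid, pg_terminate_backend(pid) AS killed
FROM pg_stat_activity
WHERE state = 'idle'
  AND pid <> pg_backend_pid();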
Combine pg_terminate_backend function and the pg_stat_activity system view.
This SO answer beautifully explains it (full quote from araqnid follows, then me again):
To mark database 'applogs' as not accepting new connections:
update pg_database set datallowconn = false where datname = 'applogs';
Another possibility would be to revoke 'connect' access on the database for the client role(s).
Disconnect users from database = kill backend. So to disconnect all other users from "applogs" database, for example:
select pg_terminate_backend(procpid)
from pg_stat_activity
where datname = 'applogs' and procpid <> pg_backend_pid();
Once you've done both of those, you are the only user connected to 'applogs'. Although there might actually be a delay before the backends actually finish disconnecting?
Update from MarkJL: There is indeed a delay before the backends finish disconnecting.
Now me again: That being said, mind that the procpid column was renamed to pid in PostgreSQL 9.2 and later.
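If you prefer not to flip datallowconn by hand, the "revoke 'connect' access" suggestion from the quote can be sketched like this (assuming the database is called 'applogs' and clients connect via the PUBLIC grant rather than explicit per-role grants):
REVOKE CONNECT ON DATABASE applogs FROM PUBLIC;
-- and to allow connections again later:
GRANT CONNECT ON DATABASE applogs TO PUBLIC;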
I think that this is much more helpful than the answer by Milen A. Radev which, while technically the same, does not come with usage examples and real-life suggestions.
I'm posting my answer because I couldn't use any of the others in my script (server 9.3):
psql -U postgres -c "SELECT pid, (SELECT pg_terminate_backend(pid)) as killed from pg_stat_activity WHERE datname = 'my_database_to_alter';"
On the next line, you can do anything you want with 'my_database_to_alter'. As you can see, you perform the query from the "postgres" database, which exists in almost every PostgreSQL installation.
Running it as the superuser, and from outside the problem database itself, worked perfectly for me.
Probably a more heavy-handed approach than should be used, but:
for x in `ps -eF | grep -E "postgres.*idle" | awk '{print $2}'`; do kill $x; done
I found this thread on the mailing list. It suggests using SIGTERM to cause the clients to disconnect.
Not as clean as DB2's force application all.