Docker <> Postgres | log_hostname on and client_hostname still null - postgresql

I'm running Docker Swarm with several containers/services; one of them is Postgres version 14.
I want to validate which DB connection is related to which service (by the service's hostname).
There is the table pg_stat_activity with the column client_hostname, but it is null, even though I set log_hostname in the Postgres conf to on or 1 (and of course restarted the container).
Any idea why, and how can I see this value?
Thanks!
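For reference, a quick way to double-check the setting and the column in question (a sketch; it assumes you can open a psql session inside the Postgres container) is:
SHOW log_hostname;
SELECT pid, client_addr, client_hostname, application_name
FROM pg_stat_activity
WHERE client_addr IS NOT NULL;
Note that client_hostname is filled in by a reverse DNS lookup of client_addr, so it stays null whenever that lookup fails, even with log_hostname enabled.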

Related

Postgresql pg_dump adding public to all schema names

I'm still a relative newbie to PostgreSQL, so pardon me if this is simple ignorance.
I've set up an active/read-only Pacemaker cluster of Postgres v9.4 per the Cluster Labs documentation.
I'm trying to verify that both databases are indeed in sync. I'm doing the dump on both hosts and checking the diff between the output. The command I'm using is:
pg_dump -U myuser mydb >dump-node-1.sql
Pacemaker shows the database status as 'sync' and querying Postgres directly also seems to indicate the sync is good... (Host .59 is my read-only standby node)
psql -c "select client_addr,sync_state from pg_stat_replication;"
+---------------+------------+
|  client_addr  | sync_state |
+---------------+------------+
| 192.16.111.59 | sync       |
+---------------+------------+
(1 row)
However, when I do a dump on the read-only host I end up with all my tables having 'public.' added to the front of the names. So table foo on the master node dumps as 'foo' whereas on the read-only node it dumps as 'public.foo'. I don't understand why this is happening... I had done a 9.2 Postgresql cluster in a similar setup and didn't see this issue. I don't have tables in the public schema on the master node...
Hope someone can help me understand what is going on.
Much appreciated!
Per a_horse_with_no_name, the security updates in 9.4.18 changed the way the dump is written compared to 9.4.15. I didn't catch that one node was still running an older version. The command that identified the problem was his suggestion to run:
psql -c "select version();"

Ambari server doesn't restart after removing node with cloudbreak

After adding a node to test scaling then removing that node with cloudbreak, the service ambari-server won't restart.
The error at launch is:
DB configs consistency check failed. Run "ambari-server start --skip-database-check" to skip. You may try --auto-fix-database flag to attempt to fix issues automatically. If you use this "--skip-database-check" option, do not make any changes to your cluster topology or perform a cluster upgrade until you correct the database consistency issues. See /var/log/ambari-server/ambari-server-check-database.log for more details on the consistency issues.
Looking at the logs doesn't tell much more. I tried restarting Postgres; sometimes it works, maybe 1 time in 10 (how is that possible?).
I went deeper in my reasoning rather than just restarting Postgres.
I opened the ambari table to look in it:
sudo su - postgres
psql ambari -U ambari -W -p 5432
(password is bigdata)
and when I looked at the tables topology_logical_request, topology_request and topology_hostgroup, I saw that the cluster had never registered a remove request, only the add request:
ambari=> select * from topology_logical_request;
 id | request_id | description
----+------------+------------------------------------------------------------
  1 |          1 | Logical Request: Provision Cluster 'sentelab-perf'
 62 |         51 | Logical Request: Scale Cluster 'sentelab-perf' (+1 hosts)
Find the ids to delete (track all requests tied to the add-node operation) and delete them (order matters):
delete from topology_hostgroup where id = 51;
delete from topology_logical_request where id = 62;
delete from topology_request where id = 51;
Close with \q, restart ambari-server, and it works!

Heroku Postgres: Too many connections. How do I kill these connections?

I have an app running on Heroku. This app has a Postgres 9.2.4 (Dev) add-on installed. To access my online database I use Navicat Postgres. Sometimes Navicat doesn't cleanly close the connections it sets up with the Postgres database. The result is that after a while there are 20+ open connections to the Postgres database. My Postgres plan only allows 20 simultaneous connections, so with 20+ open connections my Postgres database is now unreachable (too many connections).
I know this is a problem with Navicat and I'm trying to solve it on that end. But if it does happen (too many connections), how can I fix it (e.g. close all connections)?
I've tried all of the following things, without result.
Closed Navicat & restarted my computer (OS X 10.9)
Restarted my Heroku application (heroku restart)
Tried to restart the online database, but I found out there is no option to do this
Manually closed all connections from OS X to the IP of the Postgres server
Restarted our router
I think it's obvious there are some 'dead' connections at the Postgres side. But how do I close them?
Maybe have a look at what heroku pg:kill can do for you? https://devcenter.heroku.com/articles/heroku-postgresql#pg-ps-pg-kill-pg-killall
heroku pg:killall will kill all open connections, but that may be a blunt instrument for your needs.
Interestingly, you can actually kill specific connections using heroku's dataclips.
To get a detailed list of connections, you can query via dataclips:
SELECT * FROM pg_stat_activity;
In some cases, you may want to kill all connections associated with an IP address (your laptop or in my case, a server that was now destroyed).
You can see how many connections belong to each client IP using:
SELECT client_addr, count(*)
FROM pg_stat_activity
WHERE client_addr is not null
AND client_addr <> (select client_addr from pg_stat_activity where pid=pg_backend_pid())
GROUP BY client_addr;
which will list the number of connections per IP excluding the IP that dataclips itself uses.
To actually kill the connections, you pass their "pid" to pg_terminate_backend(). In the simple case:
SELECT pg_terminate_backend(1234)
where 1234 is the offending PID you found in pg_stat_activity.
In my case, I wanted to kill all connections associated with a (now dead) server, so I used:
SELECT pg_terminate_backend(pid), host(client_addr)
FROM pg_stat_activity
WHERE host(client_addr) = 'IP HERE'
1). First, log in to Heroku with your correct id (in case you have multiple accounts) using heroku login.
2). Then, run heroku apps to get a list of your apps and copy the name of the one that has the PostgreSQL db installed.
3). Finally, run heroku pg:killall --app appname to terminate all the connections.
From the Heroku documentation (emphasis is mine):
FATAL: too many connections for role
FATAL: too many connections for role "[role name]"
This occurs on Starter Tier (dev and basic) plans, which have a max connection limit of 20 per user. To resolve this error, close some connections to your database by stopping background workers, reducing the number of dynos, or restarting your application in case it has created connection leaks over time. A discussion on handling connections in a Rails application can be found here.
Because Heroku does not provide superuser access your options are rather limited to the above.
Restart server
heroku restart --app <app_name>
It will close all connections and restart.
As the superuser (eg. "postgres"), you can kill every session but your current one with a query like this:
select pg_cancel_backend(pid)
from pg_stat_activity
where pid <> pg_backend_pid();
If they do not go away, you might have to use a stronger "kill", but certainly test with pg_cancel_backend() first.
select pg_terminate_backend(pid)
from pg_stat_activity
where pid <> pg_backend_pid();
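If killing every other session is too blunt, a narrower variant (a sketch; it assumes Postgres 9.2+, where pg_stat_activity has a state column) terminates only idle sessions:
select pg_terminate_backend(pid)
from pg_stat_activity
where pid <> pg_backend_pid()
and state = 'idle';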

How can I tell if PostgreSQL's Autovacuum is running on UNIX?

How can one tell if the autovacuum daemon in Postgres 9.x is running and maintaining the database cluster?
PostgreSQL 9.3
Determine if Autovacuum is Running
This is specific to Postgres 9.3 on UNIX.
For Windows, see this question.
Query Postgres System Table
SELECT
schemaname, relname,
last_vacuum, last_autovacuum,
vacuum_count, autovacuum_count -- not available on 9.0 and earlier
FROM pg_stat_user_tables;
Grep System Process Status
$ ps -axww | grep autovacuum
24352 ?? Ss 1:05.33 postgres: autovacuum launcher process (postgres)
Grep Postgres Log
# grep autovacuum /var/log/postgresql
LOG: autovacuum launcher started
LOG: autovacuum launcher shutting down
If you want to know more about the autovacuum activity, set log_min_messages to DEBUG1..DEBUG5. The SQL command VACUUM VERBOSE will output information at log level INFO.
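For a quick check of the relevant settings from SQL (a sketch; the parameter names are standard, only the query itself is mine):
SHOW autovacuum;
SELECT name, setting
FROM pg_settings
WHERE name LIKE 'autovacuum%' OR name = 'log_autovacuum_min_duration';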
Regarding the Autovacuum Daemon, the Postgres docs state:
In the default configuration, autovacuuming is enabled and the related configuration parameters are appropriately set.
See Also:
http://www.postgresql.org/docs/current/static/routine-vacuuming.html
http://www.postgresql.org/docs/current/static/runtime-config-autovacuum.html
I'm using:
select count(*) from pg_stat_activity where query like 'autovacuum:%';
in collectd to know how many autovacuum are running concurrently.
You may need to create a security function like this:
CREATE OR REPLACE FUNCTION public.pg_autovacuum_count() RETURNS bigint
AS 'select count(*) from pg_stat_activity where query like ''autovacuum:%'';'
LANGUAGE SQL
STABLE
SECURITY DEFINER;
and call that from collectd.
In earlier Postgres versions, "query" was "current_query", so change it according to what works.
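For example, on those older versions the same check would look roughly like this:
select count(*) from pg_stat_activity where current_query like 'autovacuum:%';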
You can also run pg_activity to see the currently running queries on your database. I generally leave a terminal open with this running most of the time anyway as it's very useful.
Set log_autovacuum_min_duration to the duration you want, and any autovacuum execution that exceeds it will be logged.
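For instance (a sketch; ALTER SYSTEM requires 9.4+, on older versions edit postgresql.conf instead), to log every autovacuum run that takes longer than 250 ms:
ALTER SYSTEM SET log_autovacuum_min_duration = '250ms';
SELECT pg_reload_conf();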

Restart Heroku Postgres Dev DB

I got this error from a Play 2.0.3 Java application. How can I restart the Heroku Postgres Dev DB? I could not find any instructions for restarting the DB in the Heroku help center.
app[web.1]: Caused by: org.postgresql.util.PSQLException: FATAL: remaining connection slots are reserved for non-replication superuser connections
The error message you have there isn't a reason to restart the database; it isn't a database problem. Your application is holding too many connections, probably because you forgot to set up its connection pool. That isn't a DB server problem, and you can fix it without restarting the DB server.
If you stop your Play application or reconfigure its connection pool the problem will go away.
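To confirm that it is indeed your application holding the slots (a sketch; it assumes you can still open a single connection, e.g. via psql or a dataclip), count sessions per user and application:
SELECT usename, application_name, count(*)
FROM pg_stat_activity
GROUP BY usename, application_name
ORDER BY count(*) DESC;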
Another option is to put your Heroku instance in maintenance mode then take it out again.
Since heroku doesn't allow you to connect as a superuser (for good reasons) you can't use that reserved superuser slot to connect and manage connections like you would with normal PostgreSQL.
See also:
Heroku "psql: FATAL: remaining connection slots are reserved for non-replication superuser connections"
http://wiki.postgresql.org/wiki/Number_Of_Database_Connections
If you're a non-Heroku user who found this:
With normal PostgreSQL you can disconnect your clients from the server end using a PostgreSQL connection to your server. See how it says there's a slot reserved for "superuser connections"? Connect to Pg as a superuser (the postgres user by default) using PgAdmin-III or psql.
Once you're connected you can see other clients with:
SELECT * FROM pg_stat_activity;
If you want to terminate every connection except your own you can run:
SELECT procpid, pg_terminate_backend(procpid)
FROM pg_stat_activity WHERE procpid <> pg_backend_pid();
Add AND datname = current_database() and/or AND usename = '<target-user-name>' as appropriate.
I think I should have just added this in reply to the previous answer, but I couldn't figure out how to do that, so...
As an update to Liron Yahdav's comment in the accepted answer's thread: the "non-heroku users who found this" solution worked for me on a Heroku PostgreSQL dev database, but with a slight modification to the query Liron provided. Here is my modified query: SELECT pid, pg_terminate_backend(pid) FROM pg_stat_activity WHERE pid <> pg_backend_pid() AND usename='<your_username>';
It seems that procpid has changed to pid.
There is no way to restart the whole database. However, Heroku offers a simple way to stop all connections, which solves the problem in the majority of cases:
heroku pg:killall