How to suppress INFO messages when running psql scripts - postgresql

I'm seeing INFO messages when I run my tests, and I thought I had gotten rid of them by setting client_min_messages via PGOPTIONS. Here's my command:
PGOPTIONS='--client-min-messages=warning' \
psql -h localhost \
-p 5432 \
-d my_db \
-U my_user \
--no-align \
--field-separator '|' \
--pset footer \
--quiet \
-v AUTOCOMMIT=off \
-X \
-v VERBOSITY=terse \
-v ON_ERROR_STOP=1 \
--pset pager=off \
-f tests/test.sql \
-o "$test_results"
Can someone advise me on how to turn off the INFO messages?

This works for me: Postgres 9.1.4 on Debian GNU/Linux with bash:
env PGOPTIONS='-c client_min_messages=WARNING' psql ...
(Still works for Postgres 12 on Ubuntu 18.04 LTS with bash.)
It's also what the manual suggests. In most shells, setting environment variables also works without an explicit leading env. See maxschlepzig's comment.
Note, however, that there is no message level INFO for client_min_messages.
That's only applicable to log_min_messages and log_min_error_statement.
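For completeness, the variable can also be exported once and the rest of the command kept as-is (a minimal sketch reusing the options from the question):
export PGOPTIONS='-c client_min_messages=WARNING'
psql -X --quiet -v ON_ERROR_STOP=1 -f tests/test.sql -o "$test_results"
# alternatively, put "SET client_min_messages = warning;" at the top of tests/test.sql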

Related

Singularity connection differences between 'instance start' and 'run' with custom network portmap and SSL self-signed certificate

I have built a singularity container from the docker hub registry like so:
sudo singularity build \
postgres12.sif \
docker://postgres:12
And can successfully run the container like:
sudo singularity run \
-B postgres12_data:/var/lib/postgresql/data \
-B postgres12_run:/var/run/postgresql \
--net \
--network-args "portmap=9932:5432/tcp" \
postgres12.sif \
-c ssl=on \
-c ssl_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem \
-c ssl_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
It requires starting with root, which I don't love, but subsequent connection is available with:
psql "sslmode=require" -h 0.0.0.0 -p 9932 -U postgres
Password for user postgres:
psql (13.5 (Debian 13.5-0+deb11u1), server 12.10 (Debian 12.10-1.pgdg110+1))
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.
postgres=# \l
                                 List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges
-----------+----------+----------+------------+------------+-----------------------
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
So far, so good.
However, when I try to start this as an instance, e.g.:
sudo singularity instance start \
-B postgres12_data:/var/lib/postgresql/data \
-B postgres12_run:/var/run/postgresql \
--net \
--network-args "portmap=9932:5432/tcp" \
postgres12.sif \
postgres-ssl-01 \
-c ssl=on \
-c ssl_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem \
-c ssl_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
Then I cannot access the postgres service. I can see the instance, however:
sudo singularity instance list
INSTANCE NAME      PID      IP              IMAGE
postgres-ssl-01    12345    [IP Address]    /PATH/TO/postgres12.sif
Can anyone offer any insight into what I'm doing wrong? Thank you!
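For reference, here is a quick way to check whether anything is actually listening on the mapped port after 'instance start' (a sketch reusing the connection settings from above, not a confirmed fix):
ss -tlnp | grep 9932
psql "sslmode=require" -h 0.0.0.0 -p 9932 -U postgres -c 'SELECT 1'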

Install pgRouting in a Docker PostGIS/PostgreSQL container

I created a PostGIS database with Docker using the postgis image as usual:
docker run -d \
--name mypostgres \
-p 5555:5432 \
-e POSTGRES_PASSWORD=postgres \
-v /data/postgres/data:/var/lib/postgresql/data \
-v /data/postgres/lib:/usr/lib/postgresql/10/lib \
postgis/postgis:10-3.0
Now I can see all the extensions in the database; it has PostGIS, so that's fine, but it does not have pgRouting.
So I pulled another image:
docker pull pgrouting/pgrouting:11-3.1-3.1.3
and ran the same command:
docker run -d \
--name pgrouting \
-p 5556:5432 \
-e POSTGRES_PASSWORD=postgres \
-v /data/pgrouting/data/:/var/lib/postgresql/data/ \
-v /data/postgres/lib/:/usr/lib/postgresql/11/lib/ \
pgrouting/pgrouting:11-3.1-3.1.3
But when I execute this command:
CREATE EXTENSION pgrouting;
I get this error message:
could not load library "/usr/lib/postgresql/11/lib/plpgsql.so": /usr/lib/postgresql/11/lib/plpgsql.so: undefined symbol: AllocSetContextCreate
I can't solve this problem. Can anyone help me?
Thanks a lot.
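For reference, the extension is named pgrouting, and the extensions shipped by the image can be listed from inside the container (a quick check, assuming the container name from above):
docker exec -it pgrouting psql -U postgres -c \
  "SELECT name, default_version FROM pg_available_extensions WHERE name IN ('postgis', 'pgrouting');"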

Hasura use SSL certificates for Postgres connection

I can run Hasura from the Docker image:
docker run -d -p 8080:8080 \
-e HASURA_GRAPHQL_DATABASE_URL=postgres://username:password@hostname:port/dbname \
-e HASURA_GRAPHQL_ENABLE_CONSOLE=true \
hasura/graphql-engine:latest
But I also have a Postgres instance that can only be accessed with three certificates:
psql "sslmode=verify-ca sslrootcert=server-ca.pem \
sslcert=client-cert.pem sslkey=client-key.pem \
hostaddr=$DB_HOST \
port=$DB_PORT \
user=$DB_USER dbname=$DB_NAME"
I don't see a configuration for Hasura that allows me to connect to a Postgres instance in such a way.
Is this something I'm supposed to pass into the database connection URL?
How should I do this?
You'll need to mount your certificates into the Docker container and then configure libpq (which is what Hasura uses underneath) to use the required certificates via these environment variables. It'll be something like this (I haven't tested this):
docker run -d -p 8080:8080 \
-v /absolute-path-of-certs-folder:/certs \
-e HASURA_GRAPHQL_DATABASE_URL=postgres://hostname:port/dbname \
-e HASURA_GRAPHQL_ENABLE_CONSOLE=true \
-e PGSSLMODE=verify-ca \
-e PGSSLCERT=/certs/client-cert.pem \
-e PGSSLKEY=/certs/client-key.pem \
-e PGSSLROOTCERT=/certs/server-ca.pem \
hasura/graphql-engine:latest
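Before starting Hasura, the same libpq environment variables can be sanity-checked with psql from the host, since psql reads them too (an untested sketch; the paths and connection variables are placeholders):
PGSSLMODE=verify-ca \
PGSSLROOTCERT=/absolute-path-of-certs-folder/server-ca.pem \
PGSSLCERT=/absolute-path-of-certs-folder/client-cert.pem \
PGSSLKEY=/absolute-path-of-certs-folder/client-key.pem \
psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$DB_NAME" -c 'SELECT 1'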

How can I programmatically get the hostname for the BIGSQL_HEAD?

I need to programmatically retrieve the BIGSQL_HEAD hostname of my BigInsights on Cloud enterprise cluster from a script so I can automate connecting to that host.
The BIGSQL_HEAD hostname is in Ambari - how can I retrieve this information using 'standard' unix tools?
BI_HOST=your_biginsights_mastermanager_hostname
BI_USER=your_biginsights_username
BI_PASS=your_biginsights_password
BASE_URL=https://${BI_HOST}:9443
CLUSTER_NAME=$(curl -k -s -u ${BI_USER}:${BI_PASS} -H 'X-Requested-By:ambari' \
${BASE_URL}/api/v1/clusters | python -c \
'import json,sys;print(json.load(sys.stdin)["items"][0]["Clusters"]["cluster_name"])')
BIGSQL_HOST=$(curl -k -s -u ${BI_USER}:${BI_PASS} -H 'X-Requested-By:ambari' \
${BASE_URL}/api/v1/clusters/${CLUSTER_NAME}/services/BIGSQL/components/BIGSQL_HEAD | \
python -c \
'import json,sys;print(json.load(sys.stdin)["host_components"][0]["HostRoles"]["host_name"])')
echo ${BIGSQL_HOST}
These commands can be run on the BigInsights cluster or on your client machine.
Thanks to Pierre Regazzoni for the ambari code.
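If jq happens to be installed, the same two Ambari lookups can be written without Python (a sketch against the same endpoints, not part of the original answer):
CLUSTER_NAME=$(curl -k -s -u ${BI_USER}:${BI_PASS} -H 'X-Requested-By:ambari' \
  ${BASE_URL}/api/v1/clusters | jq -r '.items[0].Clusters.cluster_name')
BIGSQL_HOST=$(curl -k -s -u ${BI_USER}:${BI_PASS} -H 'X-Requested-By:ambari' \
  ${BASE_URL}/api/v1/clusters/${CLUSTER_NAME}/services/BIGSQL/components/BIGSQL_HEAD | \
  jq -r '.host_components[0].HostRoles.host_name')
echo ${BIGSQL_HOST}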

How to promote the master after failover on PostgreSQL with Docker

First of all, I'm using this setup: postgres-docker-cluster. Everything works fine during the failover: I stop the master and slave1 takes its place. But if I turn the old master back on, I'm not sure how to promote it to master again. I would appreciate any pointers in the right direction. Do I need to promote it manually? Sorry, I'm pretty new to this concept (HA).
This setup uses repmgr, pgpool2, and Postgres 9.5.
Some info on the Docker setup:
postgresql-cluster-pgsql
postgresql-cluster-pgpool
docker-compose.yml
So I figured out how to sort of solve the problem:
Create the containers manually
Master
docker run \
-e INITIAL_NODE_TYPE='master' \
-e NODE_ID=1 \
-e NODE_NAME='node1' \
-e CLUSTER_NODE_NETWORK_NAME='pgmaster' \
-e POSTGRES_PASSWORD='monkey_pass' \
-e POSTGRES_USER='monkey_user' \
-e POSTGRES_DB='monkey_db' \
-e CLUSTER_NODE_REGISTER_DELAY=5 \
-e REPLICATION_DAEMON_START_DELAY=120 \
-e CLUSTER_NAME='pg_cluster' \
-e REPLICATION_DB='replication_db' \
-e REPLICATION_USER='replication_user' \
-e REPLICATION_PASSWORD='replication_pass' \
-v cluster-archives:/var/cluster_archive \
-p 5432:5432 \
--net mynet \
--net-alias pgmaster \
--name pgmastertest \
paunin/postgresql-cluster-pgsql
Slave
docker run \
-e INITIAL_NODE_TYPE='standby' \
-e NODE_ID=2 \
-e NODE_NAME='node2' \
-e REPLICATION_PRIMARY_HOST='pgmaster' \
-e CLUSTER_NODE_NETWORK_NAME='pgslave1' \
-e REPLICATION_UPSTREAM_NODE_ID=1 \
-v cluster-archives:/var/cluster_archive \
-p 5441:5432 \
--net mynet \
--net-alias pgslave1 \
--name pgslavetest \
paunin/postgresql-cluster-pgsql
Pgpool
docker run \
-e PCP_USER='pcp_user' \
-e PCP_PASSWORD='pcp_pass' \
-e PGPOOL_START_DELAY=120 \
-e REPLICATION_USER='replication_user' \
-e REPLICATION_PASSWORD='replication_pass' \
-e SEARCH_PRIMARY_NODE_TIMEOUT=5 \
-e DB_USERS='monkey_user:monkey_pass' \
-e BACKENDS='0:pgmaster:5432:1:/var/lib/postgresql/data:ALLOW_TO_FAILOVER,1:pgslave1::::' \
-p 5430:5432 \
-p 9898:9898 \
--net mynet \
--net-alias pgpool \
--name pgpooltest \
paunin/postgresql-cluster-pgpool
On the line BACKENDS='0:pgmaster:5432:1:/var/lib/postgresql/data:ALLOW_TO_FAILOVER,1:pgslave1::::' \ you can add more slaves to pgpool.
Stop the master (pgmaster); the slave (pgslave1) will be promoted after a few seconds.
Add a new slave container:
docker run \
-e INITIAL_NODE_TYPE='standby' \
-e NODE_ID=3 \
-e NODE_NAME='node1' \
-e REPLICATION_PRIMARY_HOST='pgslave1' \
-e CLUSTER_NODE_NETWORK_NAME='pgmaster' \
-e REPLICATION_UPSTREAM_NODE_ID=2 \
-v cluster-archives:/var/cluster_archive \
-p 5432:5432 \
--net mynet \
--net-alias pgmaster \
--name pgmastertest3 \
paunin/postgresql-cluster-pgsql
On the following lines:
-e REPLICATION_PRIMARY_HOST='pgslave1' \ make sure you are pointing to the alias of the new master (pgslave1).
-e REPLICATION_UPSTREAM_NODE_ID=2 \ make sure you are pointing to the new master's node id (2).
-e NODE_ID=3 \ make sure this id doesn't already exist in the repl_nodes table.
--net-alias pgmaster \ you can use the alias from your old master, or one that you already added to pgpool in BACKENDS='0:pgmaster:5432:1:/var/lib/postgresql/data:ALLOW_TO_FAILOVER,1:pgslave1::::' \ otherwise, if the new master fails, repmgr won't be able to recover it.
It's a little manual, but it does what I need, which is to add a new slave to the new master.
When the master fails, the PostgreSQL cluster elects another master from the standby nodes (based on the node weight in the cluster). So when the ex-master is finally brought back to life, the cluster remains loyal to its current master; the ex-master is initiated back into the cluster, but this time as a standby. All of that is managed entirely by PostgreSQL, not by Pgpool.
So what you would expect is that if the new master (the ex-standby) fails (or is scaled to 0), the cluster fails over to the ex-master and elects it as leader once again, and when the standby is scaled up again it joins as a standby and things are back to normal. That is exactly what the PostgreSQL cluster does.
But most probably the Pgpool service will fail at that moment, because whenever a node fails, Pgpool marks that node's status as DOWN, and even if the node comes back to life it will not notify Pgpool, so your traffic will not reach that node.
So if you check the recovered node's status, after its recovery, on the Pgpool container using the PCP commands:
pcp_node_info -U pcp_user -h localhost -n 1 # master node id
pgmaster 5432 down 1
So what you have to do is manually re-attach the fallen node back to Pgpool using:
pcp_attach_node -U pcp_user -h localhost -n 1 # master node id
--- executed successfully ---
pcp_node_info -U pcp_user -h localhost -n 1 # master node id
pgmaster 5432 up 1
At this point Pgpool recognizes the ex-master node once again and can direct traffic to it.
After that, whenever you remove (scale to 0) the ex-standby (now master) service, the whole solution (PostgreSQL plus Pgpool) fails over to the actual master, and you can then bring the standby up again and re-attach it to Pgpool.
P.S. The downtime is only the failover downtime of Pgpool, and the Pgpool service keeps its original configuration: nothing added, nothing restarted (well, except for the PostgreSQL node that failed, hopefully :D).
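To make the re-attach step less error-prone, the status check and the pcp_attach_node call can be combined into a small loop (a minimal sketch based on the output format shown above; the number of backends and the pcp_user are assumptions):
# re-attach any backend that pgpool still marks as down
# (a ~/.pcppass file plus -w avoids the interactive password prompt)
for NODE in 0 1; do
  if pcp_node_info -w -U pcp_user -h localhost -n $NODE | grep -q ' down '; then
    pcp_attach_node -w -U pcp_user -h localhost -n $NODE
  fi
done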