PgBouncer and auth to PostgreSQL - postgresql

pgbouncer version 1.7.2
psql (9.5.6)
I am trying to use auth_hba_file (/var/lib/pgsql/9.5/data/pg_hba.conf) with PgBouncer.
pgbouncer.ini config:
[databases]
postgres = host=localhost port=5432 dbname=postgres user=postgres
test = host=localhost port=5432 dbname=test user=test
[pgbouncer]
logfile = /var/log/pgbouncer/pgbouncer.log
pidfile = /var/run/pgbouncer/pgbouncer.pid
listen_addr = *
listen_port = 6432
auth_type = hba
auth_hba_file = /var/lib/pgsql/9.5/data/pg_hba.conf
admin_users = postgres
stats_users = stats, postgres
pool_mode = session
server_reset_query = DISCARD ALL
max_client_conn = 100
default_pool_size = 20
cat pg_hba.conf | grep -v "#" | grep -v "^$"
local all all trust
host all all 127.0.0.1/32 trust
host all all ::1/128 trust
host test test 10.255.4.0/24 md5
psql -h 10.233.4.16 -p 5432 -U test
Password for user test:
psql (9.5.6)
Type "help" for help.
test=> \q
psql -h 10.233.4.16 -p 6432 -U test
psql: ERROR: No such user: test
tail -fn10 /var/log/pgbouncer/pgbouncer.log
LOG C-0x78f7e0: (nodb)/(nouser)#10.255.4.245:8963 closing because: No such user: test (age=0)
WARNING C-0x78f7e0: (nodb)/(nouser)#10.255.4.245:8963 Pooler Error: No such user: test
LOG C-0x78f7e0: (nodb)/(nouser)#10.255.4.245:8963 login failed: db=test user=test
But I cannot connect to PostgreSQL through PgBouncer using pg_hba.conf.
Can somebody help? Do you have an example of using auth_hba_file?
Thanks
I changed config:
[root@dev-metrics2 pgbouncer]# cat pgbouncer.ini | grep -v ";" | grep -v "^$" | grep -v "#"
[databases]
postgres = host=localhost port=5432 dbname=postgres user=postgres
test = host=localhost port=5432 dbname=test auth_user=test
[pgbouncer]
logfile = /var/log/pgbouncer/pgbouncer.log
pidfile = /var/run/pgbouncer/pgbouncer.pid
listen_addr = *
listen_port = 6432
auth_query = SELECT usename, passwd FROM pg_shadow WHERE usename=$1
admin_users = postgres
stats_users = stats, postgres
pool_mode = session
server_reset_query = DISCARD ALL
max_client_conn = 100
default_pool_size = 20
Drop and create the user and DB:
[local]:5432 postgres@postgres # DROP DATABASE test;
DROP DATABASE
[local]:5432 postgres@postgres # DROP USER test ;
DROP ROLE
[local]:5432 postgres@postgres # CREATE USER test with password 'test';
CREATE ROLE
[local]:5432 postgres@postgres # CREATE DATABASE test with owner test;
CREATE DATABASE
PGPASSWORD=test psql -h 10.233.4.16 -p 6432 -U test
Password for user test:
psql: ERROR: Auth failed
tail -fn1 /var/log/pgbouncer/pgbouncer.log
LOG Stats: 0 req/s, in 0 b/s, out 0 b/s,query 0 us
LOG C-0x17b57a0: test/test#10.255.4.245:3069 login attempt: db=test user=test tls=no
LOG C-0x17b57a0: test/test#10.255.4.245:3069 closing because: client unexpected eof (age=0)
LOG C-0x17b57a0: test/test#10.255.4.245:3070 login attempt: db=test user=test tls=no
LOG C-0x17b57a0: test/test#10.255.4.245:3070 closing because: Auth failed (age=0)
WARNING C-0x17b57a0: test/test#10.255.4.245:3070 Pooler Error: Auth failed
Working config:
cat pgbouncer.ini | grep -v ";" | grep -v "^$" | grep -v "#"
[databases]
*= port=5432 auth_user=postgres
[pgbouncer]
logfile = /var/log/pgbouncer/pgbouncer.log
pidfile = /var/run/pgbouncer/pgbouncer.pid
listen_addr = *
listen_port = 6432
auth_query = SELECT usename, passwd FROM pg_shadow WHERE usename=$1
admin_users = postgres
stats_users = stats, postgres
pool_mode = session
server_reset_query = DISCARD ALL
max_client_conn = 100
default_pool_size = 20

Additionally, I had to put a space around the wildcard entry:
*= port=5432 auth_user=postgres   # old line
* = port=5432 auth_user=postgres  # new line
That works for me.
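For reference, a minimal sketch of the auth_query approach (paths are examples, not from the original post; note that PgBouncer reads the password for the auth_user itself from auth_file, so userlist.txt needs an entry for it):

```ini
[databases]
* = port=5432 auth_user=postgres

[pgbouncer]
auth_type = md5
; userlist.txt must contain the auth_user entry, e.g. "postgres" "md5<hash>"
auth_file = /etc/pgbouncer/userlist.txt
auth_query = SELECT usename, passwd FROM pg_shadow WHERE usename=$1
```

Any client user not listed in auth_file is then looked up through auth_query on the target database.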

Related

How to exit psql depending on the outcome of a query

cd C:\Program Files\PostgreSQL\12\bin
psql.exe -v v1="test" -h localhost -d postgres -U postgres -p 5432 -a -q -f /elan/Validate_files.sql
....
Select case when current_date-date(a.timestamp_1) >1 then 'No' Else 'Yes' End as Check
from elan.temp_file_names a
where a.filename not in (select b.filename from elan.temp_previous_names b)
and position('ELAN_CLAIMS' in a.filename)>1 order by timestamp_1 desc
\gset
If Check is 'No', I want to end the psql process. How do I do that?
Change the CASE expression to return a boolean:
CASE WHEN ... THEN TRUE ELSE FALSE
END AS check ... \gset
Then use \if:
\if :check
\q
\endif
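Putting the answer together, a sketch of the full script (table and column names follow the question; \if requires psql 10 or newer, and \gset needs exactly one result row, hence the added LIMIT 1):

```sql
-- sketch of Validate_files.sql: quit psql when the newest matching file is stale
SELECT CASE WHEN current_date - date(a.timestamp_1) > 1 THEN FALSE
            ELSE TRUE END AS check
FROM elan.temp_file_names a
WHERE a.filename NOT IN (SELECT b.filename FROM elan.temp_previous_names b)
  AND position('ELAN_CLAIMS' IN a.filename) > 1
ORDER BY timestamp_1 DESC
LIMIT 1
\gset
\if :check
  \echo 'file is recent, continuing'
\else
  \q
\endif
```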

Missing FROM-clause entry for table x

I've got an error where I can't write the data from this query into the CSV file.
The error is:
Missing FROM-clause entry for table x
psql -t -A -F ';' -h localhost -U username -c "with x as
(select created::timestamp(0), phone_number, from_addr, to_addr, price,
completion_status_name from archived_order where ds_id = 510)
select x.created, x.phone_number, x.from_addr, x.to_addr, x.price,
x.completion_status_name order by x.created;" > someName.csv
The fix is to reference the CTE x in a FROM clause:
psql -t -A -F ';' -h localhost -U username -c "with x as
(select created::timestamp(0), phone_number, from_addr, to_addr, price,
completion_status_name from archived_order where ds_id = 510)
select x.created, x.phone_number, x.from_addr, x.to_addr, x.price,
x.completion_status_name from x order by x.created;" > someName.csv

Convert Tables in Postgresql to Shapefile

So far I have loaded all the parcel tables (with geometry information) in Alaska to PostgreSQL. The tables are originally stored in dump format. Now, I want to convert each table in Postgres to shapefile through cmd interface using ogr2ogr.
My code is something like below:
ogr2ogr -f "ESRI Shapefile" "G:\...\Projects\Dataset\Parcel\test.shp" PG:"dbname=parceldb host=localhost port=5432 user=postgres password=postgres" -sql "SELECT * FROM ak_fairbanks"
However, the system kept returning this error: Unable to open datasource
PG:dbname='parceldb' host='localhost' port='5432' user='postgres' password='postgres'
with the following drivers.
There is also the pgsql2shp option available. For this you need to have the utility installed on your system.
The command for this conversion is:
pgsql2shp -u <username> -h <hostname> -P <password> -p 5434 -f <file path to save shape file> <database> [<schema>.]<table_name>
This command has other options as well, which can be seen at this link.
Exploring this case based on the comments in another answer, I decided to share my Bash scripts and my ideas.
Exporting multiple tables
To export many tables from a specific schema, I use the following script.
#!/bin/bash
source ./pgconfig
export PGPASSWORD=$password
# To filter, set the table names in the FILTER variable below and uncomment it by removing the leading #.
# FILTER=("table_name_a" "table_name_b" "table_name_c")
#
# Set the output directory
OUTPUT_DATA="/tmp/pgsql2shp/$database"
#
#
# Remove the Shapefiles after ZIP
RM_SHP="yes"
# Define where pgsql2shp is and format the base command
PG_BIN="/usr/bin"
PG_CON="-d $database -U $user -h $host -p $port"
# creating output directory to put files
mkdir -p "$OUTPUT_DATA"
SQL_TABLES="select table_name from information_schema.tables where table_schema = '$schema'"
SQL_TABLES="$SQL_TABLES and table_type = 'BASE TABLE' and table_name != 'spatial_ref_sys';"
TABLES=($($PG_BIN/psql $PG_CON -t -c "$SQL_TABLES"))
export_shp(){
SQL="$1"
TB="$2"
pgsql2shp -f "$OUTPUT_DATA/$TB" -h $host -p $port -u $user $database "$SQL"
zip -j "$OUTPUT_DATA/$TB.zip" "$OUTPUT_DATA/$TB.shp" "$OUTPUT_DATA/$TB.shx" "$OUTPUT_DATA/$TB.prj" "$OUTPUT_DATA/$TB.dbf" "$OUTPUT_DATA/$TB.cpg"
}
for TABLE in ${TABLES[@]}
do
DATA_QUERY="SELECT * FROM $schema.$TABLE"
SHP_NAME="$TABLE"
if [[ ${#FILTER[@]} -gt 0 ]]; then
echo "Has filter by table name"
if [[ " ${FILTER[@]} " =~ " ${TABLE} " ]]; then
export_shp "$DATA_QUERY" "$SHP_NAME"
fi
else
export_shp "$DATA_QUERY" "$SHP_NAME"
fi
# remove intermediate files
if [[ "$RM_SHP" = "yes" ]]; then
rm -f $OUTPUT_DATA/$SHP_NAME.{shp,shx,prj,dbf,cpg}
fi
done
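The whitespace-padded membership test in the script above (`=~ " ${TABLE} "`) relies on padding both the joined array and the candidate with spaces, so only whole table names match. A standalone sketch (array contents are examples):

```shell
#!/bin/bash
# Whitespace-padded membership test, as used in the export script above.
FILTER=("table_a" "table_b")

contains() {
  local t="$1"
  # Pad the joined array and the candidate with spaces so "table_b"
  # matches but "table_bc" or "able_b" does not.
  [[ " ${FILTER[*]} " =~ " ${t} " ]]
}

contains "table_b" && echo "table_b: matched"
contains "table_bc" || echo "table_bc: not matched"
```

Note that this is a substring trick, not a true set lookup, so it only stays safe while table names contain no spaces.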
Splitting data into multiple files
To avoid the problem with large tables, where pgsql2shp does not write the data to the shapefile, we can split the data using a paging strategy. In Postgres we can use LIMIT, OFFSET and ORDER BY for paging.
This method assumes that your table has a primary key, which is used to sort the data in my example script.
#!/bin/bash
source ./pgconfig
export PGPASSWORD=$password
# To filter, set the table names in the FILTER variable below and uncomment it by removing the leading #.
# FILTER=("table_name_a" "table_name_b" "table_name_c")
#
# Set the output directory
OUTPUT_DATA="/tmp/pgsql2shp/$database"
#
#
# Remove the Shapefiles after ZIP
RM_SHP="yes"
# Define where pgsql2shp is and format the base command
PG_BIN="/usr/bin"
PG_CON="-d $database -U $user -h $host -p $port"
# creating output directory to put files
mkdir -p "$OUTPUT_DATA"
SQL_TABLES="select table_name from information_schema.tables where table_schema = '$schema'"
SQL_TABLES="$SQL_TABLES and table_type = 'BASE TABLE' and table_name != 'spatial_ref_sys';"
TABLES=($($PG_BIN/psql $PG_CON -t -c "$SQL_TABLES"))
export_shp(){
SQL="$1"
TB="$2"
pgsql2shp -f "$OUTPUT_DATA/$TB" -h $host -p $port -u $user $database "$SQL"
zip -j "$OUTPUT_DATA/$TB.zip" "$OUTPUT_DATA/$TB.shp" "$OUTPUT_DATA/$TB.shx" "$OUTPUT_DATA/$TB.prj" "$OUTPUT_DATA/$TB.dbf" "$OUTPUT_DATA/$TB.cpg"
}
for TABLE in ${TABLES[@]}
do
GET_PK="SELECT a.attname "
GET_PK="${GET_PK}FROM pg_index i "
GET_PK="${GET_PK}JOIN pg_attribute a ON a.attrelid = i.indrelid AND a.attnum = ANY(i.indkey) "
GET_PK="${GET_PK}WHERE i.indrelid = '$schema.$TABLE'::regclass AND i.indisprimary"
PK=($($PG_BIN/psql $PG_CON -t -c "$GET_PK"))
MAX_ROWS=($($PG_BIN/psql $PG_CON -t -c "SELECT COUNT(*) FROM $schema.$TABLE"))
LIMIT=10000
OFFSET=0
# base query
DATA_QUERY="SELECT * FROM $schema.$TABLE"
# continue until all data are fetched.
while [ $OFFSET -le $MAX_ROWS ]
do
DATA_QUERY_P="$DATA_QUERY ORDER BY $PK OFFSET $OFFSET LIMIT $LIMIT"
OFFSET=$(( OFFSET+LIMIT ))
SHP_NAME="${TABLE}_${OFFSET}"
if [[ ${#FILTER[@]} -gt 0 ]]; then
echo "Has filter by table name"
if [[ " ${FILTER[@]} " =~ " ${TABLE} " ]]; then
export_shp "$DATA_QUERY_P" "$SHP_NAME"
fi
else
export_shp "$DATA_QUERY_P" "$SHP_NAME"
fi
# remove intermediate files
if [[ "$RM_SHP" = "yes" ]]; then
rm -f $OUTPUT_DATA/$SHP_NAME.{shp,shx,prj,dbf,cpg}
fi
done
done
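The paging arithmetic in the loop above can be dry-run without a database. This sketch just prints the queries it would generate for an illustrative row count (MAX_ROWS would normally come from the COUNT(*) query):

```shell
#!/bin/bash
# Dry run of the OFFSET/LIMIT paging used above (no database needed).
MAX_ROWS=25000   # stand-in for the COUNT(*) result
LIMIT=10000
OFFSET=0
PAGES=0
while [ "$OFFSET" -le "$MAX_ROWS" ]; do
  echo "SELECT * FROM my_schema.my_table ORDER BY id OFFSET $OFFSET LIMIT $LIMIT"
  OFFSET=$(( OFFSET + LIMIT ))
  PAGES=$(( PAGES + 1 ))
done
echo "generated $PAGES page queries"
```

With 25,000 rows and a page size of 10,000 this produces three queries (offsets 0, 10000 and 20000), so each exported shapefile holds at most LIMIT rows.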
Common config file
Configuration file for PostgreSQL connection used in both examples (pgconfig):
user="postgres"
host="my_ip_or_hostname"
port="5432"
database="my_database"
schema="my_schema"
password="secret"
Another strategy is to choose GeoPackage as the output format, which supports larger file sizes than the shapefile format while maintaining portability across operating systems and having sufficient support in GIS software.
ogr2ogr -f GPKG output_file.gpkg PG:"host=my_ip_or_hostname user=postgres dbname=my_database password=secret" -sql "SELECT * FROM my_schema.my_table"
References:
Retrieve primary key columns - Postgres
LIMIT, OFFSET, ORDER BY and Pagination in PostgreSQL
ogr2ogr - GDAL

point in-time recovery in PostgreSQL for windows

I am using archive_cleanup_command in my recovery.conf file. When I start the PostgreSQL server I get a FATAL error and Postgres does not start.
FATAL: syntax error in file "recovery.conf" line 4, near token "'"
is shown in pg_log.
recovery.conf:
standby_mode = 'on'
primary_conninfo = 'host=localhost port=5432 user=postgres password=postgres'
restore_command = 'copy "D:\\Program Files\\WAL\\%f" "%p" '
archive_cleanup_command = 'pg_archivecleanup "D:\\Program Files\\WAL" %r'
trigger_file = 'standby.stop'
recovery_target_timeline='latest'

Slony-I replication on PostgreSQL 9.3 / Slony 2.2.0 / Windows 7

Title: Slony-I replication is not working.
binary paths : C:\Program Files\PostgreSQL\9.3\share
Master.txt
cluster name = testing;
node 1 admin conninfo = 'dbname=original host=localhost user=postgres password=sa';
node 2 admin conninfo = 'dbname=copy host=localhost user=postgres password=sa';
init cluster (id = 1,comment = 'Node 1 - Master');
create set (id = 1, origin = 1);
set add table (set id = 1, origin = 1, id = 1, fully qualified name = 'public.test');
store node(id = 2,event node = 1,comment = 'slave');
store path(server = 1,client = 2,conninfo = 'dbname=original host=localhost user=postgres password=sa');
store path(server = 2,client = 1,conninfo = 'dbname=copy host=localhost user=postgres password=sa');
Slave.txt
CLUSTER NAME = testing;
node 1 admin conninfo = 'dbname=original host=localhost port=5432 user=postgres password=sa';
node 2 admin conninfo = 'dbname=copy host=localhost port=5432 user=postgres password=sa';
subscribe set (id = 1,provider = 1, receiver = 2, forward = no);
Question
slonik keeps waiting for an event, and when I tested it, replication was not working even though the Slony replication schema appears.
Thank you
You need to create a Slony service on both the master and slave machines.
Create a file named slon.conf with the content:
cluster_name=testing
conn_info = 'dbname = original host = localhost user = postgres password = sa port = 5432'
Then open a command prompt, go to the Postgres bin folder, and type:
slon -regservice Slony-I
slon -addengine Slony-I slon.conf
slon -listengines Slony-I
This must be done on both the master and slave machines.