pgbouncer login rejected - postgresql

For two days now I have been unable to get past a connection error through pgbouncer when I use auth_type = hba:
postgres=# create user monitoring with password 'monitoring';
postgres=# create database monitoring owner monitoring;
postgres=# \du+ monitoring
List of roles
Role name | Attributes | Member of | Description
------------+------------+-----------+-------------
monitoring | | {} |
postgres=# \l+ monitoring
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges | Size | Tablespace | Description
------------+------------+----------+-------------+-------------+-------------------+---------+------------+-------------
monitoring | monitoring | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | 7861 kB | pg_default |
/var/lib/pgsql/10/data/pg_hba.conf:
# TYPE DATABASE USER ADDRESS METHOD
host monitoring monitoring 0.0.0.0/0 trust
local monitoring monitoring trust
/etc/pgbouncer/pgbouncer.ini:
pidfile = /var/run/pgbouncer/pgbouncer.pid
reserve_pool_size = 5
reserve_pool_timeout = 2
listen_port = 6432
listen_addr = *
auth_type = hba
auth_hba_file = /etc/pgbouncer/hba_bouncer.conf
auth_file = /etc/pgbouncer/userlist.txt
logfile = /var/log/pgbouncer/pgbouncer.log
log_connections = 0
log_disconnections = 0
log_pooler_errors = 1
max_client_conn = 5000
server_idle_timeout = 30
pool_mode = transaction
server_reset_query =
admin_users = root
stats_users = root,monitoring
[databases]
* = client_encoding=UTF8 host=localhost port=5432 pool_size=1000
In pgbouncer's hba file I also tried specifying the server's actual interface addresses with a /32 mask, as well as /8 and /16 (the real mask of my network segment).
The result is always the same: login rejected!
/etc/pgbouncer/hba_bouncer.conf:
host monitoring monitoring 0.0.0.0/0 trust
host monitoring monitoring 127.0.0.1/32 trust
/etc/pgbouncer/userlist.txt:
"monitoring" "monitoring"
Connection attempt:
# psql -U monitoring -p 5432 -h 127.0.0.1
psql (10.1)
Type "help" for help.
monitoring=>
# psql -U monitoring -p 6432 -h 127.0.0.1
psql: ERROR: login rejected

We have a use case similar to yours. We are running version 1.12.0 and we ran into the same issue where we also got the "ERROR: login rejected" message.
It turned out after investigation that the permissions on our pg_hba.conf for pgbouncer were incorrect. Once we gave pgbouncer read permission on the file, it worked as expected. Unfortunately, nothing in the more verbose logging we turned on revealed this; we stumbled across the solution through testing on our own.
P.S. We left the password hash in the pgbouncer config as "" since we're using trust on our connection. I don't think there is anything different in our config from what you posted otherwise.
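In case it helps to rule this out quickly, here is a minimal check-and-fix sketch, assuming pgbouncer runs as a system user named "pgbouncer" (adjust the owner to whatever account your pgbouncer process actually runs under):
# ls -l /etc/pgbouncer/hba_bouncer.conf
# chown pgbouncer:pgbouncer /etc/pgbouncer/hba_bouncer.conf
# chmod 640 /etc/pgbouncer/hba_bouncer.conf
# systemctl restart pgbouncer
If permissions were the culprit, the psql attempt on port 6432 should now get past "login rejected".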

Related

PostgreSQL - Etcd - Patroni Cluster Restoration Problem

I am trying to create a PostgreSQL - Etcd - Patroni (PEP) cluster. There are lots of examples on the internet; I have created one and it runs perfectly. However, this architecture has to comply with my company's backup solution, which is NetApp. We put the database into backup mode with "SELECT pg_start_backup('test_backup', true);" and then copy all the data files to the backup directory.
The PEP cluster has a small problem with this solution. Taking the backup works fine, but restoring is not as smooth. In order to restore the leader of the PEP cluster I need to stop the database, move the backup files into the data directory, and finally start the restore. At this point Patroni says the restored node is a new cluster. Here is the error:
raise PatroniFatalException('Failed to bootstrap cluster')
patroni.exceptions.PatroniFatalException: 'Failed to bootstrap cluster'
2022-04-11 12:49:29,930 INFO: No PostgreSQL configuration items changed, nothing to reload.
2022-04-11 12:49:29,942 INFO: Lock owner: None; I am pgsql_node1
2022-04-11 12:49:29,962 INFO: trying to bootstrap a new cluster
The files belonging to this database system will be owned by user "postgres".
Also, when I checked the Patroni cluster status I saw this:
root@4cddca032454:/data/backup# patronictl -c /etc/patroni/config.yml list
+ Cluster: pgsql (7085327534197401486) --------+----+-----------+
| Member | Host | Role | State | TL | Lag in MB |
+-------------+------------+---------+---------+----+-----------+
| pgsql_node1 | 172.17.0.6 | Replica | stopped | | unknown |
| pgsql_node2 | 172.17.0.7 | Replica | running | 11 | 0 |
| pgsql_node3 | 172.17.0.8 | Replica | running | 11 | 0 |
+-------------+------------+---------+---------+----+-----------+
At this point I have a PEP cluster without a leader. So, how can I solve this issue?
(Note: the restored node attempted to join the right cluster, because before starting the restore I checked the cluster status and got this result:
root@4cddca032454:/data/backup# patronictl -c /etc/patroni/config.yml list
+ Cluster: pgsql (7085327534197401486) --------+----+-----------+
| Member | Host | Role | State | TL | Lag in MB |
+-------------+------------+---------+---------+----+-----------+
| pgsql_node2 | 172.17.0.7 | Replica | running | 11 | 0 |
| pgsql_node3 | 172.17.0.8 | Replica | running | 11 | 0 |
+-------------+------------+---------+---------+----+-----------+
pgsql_node1 is not there.
)
As explained at https://patroni.readthedocs.io/en/latest/existing_data.html#existing-data, I can create a new cluster after the restore, but my priority is saving the existing cluster. Or am I thinking about this wrong, and are these steps the same as converting a standalone PostgreSQL database to a PEP cluster?
Please let me know if you need more data or if something is unclear.
Here is my leader node patroni config file:
scope: "cluster"
namespace: "/cluster/"
name: 8d454a228d251
restapi:
  listen: 172.17.0.2:8008
  connect_address: 172.17.0.2:8008
etcd:
  host: 172.17.0.2:2379
dcs:
  ttl: 30
  loop_wait: 10
  retry_timeout: 10
  maximum_lag_on_failover: 1048576
  check_timeline: true
  postgresql:
    use_pg_rewind: true
    remove_data_directory_on_rewind_failure: true
    remove_data_directory_on_diverged_timelines: true
    use_slots: true
postgresql:
  listen: 0.0.0.0:5433
  connect_address: 172.17.0.2:5432
  use_unix_socket: true
  data_dir: /data/postgresql/
  bin_dir: /usr/lib/postgresql/14/bin
  config_dir: /etc/postgresql/14/main
  authentication:
    replication:
      username: "patroni_replication"
      password: "123123"
    superuser:
      username: "patroni_superuser"
      password: "123123"
  parameters:
    unix_socket_directories: '/var/run/postgresql/'
    logging_collector: 'on'
    log_directory: '/var/log/postgresql'
    log_filename: 'postgresql-14-8d454a228d25.log'
    restore_command: 'cp /data/backup/%f %p'
    recovery_target_timeline: 'latest'
    promote_trigger_file: '/tmp/promote'
Thanks!
If I understand correctly, you want to restore the primary server (leader) by restoring its data directory from a new set of backup files.
After restoring the leader's data directory, you need to recreate the Patroni cluster (remove its keys from the DCS) with the patronictl remove command.
Example:
stop pgsql_node2
stop pgsql_node3
stop pgsql_node1
on pgsql_node1:
patronictl -c /etc/patroni/config.yml remove <clustername>
start pgsql_node1
start pgsql_node2
start pgsql_node3
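After the nodes are back up, you can verify the result with the same command used in the question; if the remove worked, the node with the restored data directory should bootstrap the cluster and show up as Leader, with the other nodes rejoining as replicas:
patronictl -c /etc/patroni/config.yml list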

PGBouncer: Can't connect to the right db

I'm facing an issue. I've installed pgbouncer on a production server, which also hosts an Odoo instance and PostgreSQL.
In my logs, I'm seeing this:
2018-09-10 16:39:16.389 10123 WARNING C-0x1eb5478: (nodb)/(nouser)@unix(18272):6432 pooler error: no such database: postgres
2018-09-10 16:39:16.389 10123 LOG C-0x1eb5478: (nodb)/(nouser)@unix(18272):6432 login failed: db=postgres user=oerppreprod
Here is the current pgbouncer conf:
pgbouncer_archive = host=127.0.0.1 port=5432 dbname=archive
admin_users = postgres
ignore_startup_parameters = extra_float_digits
That is alongside the default config (I've only added/edited the lines above).
Why is it trying to connect to the postgres database?
When I go back to the previous setup (without PGBouncer, just swapping port 6432 back to 5432), everything works.
Any idea?
Thanks in advance!
I had the same issue; maybe this will be useful to somebody. I solved it with a few steps:
At the beginning of every request, your framework or PDO (or similar) runs an initial query against the postgres database to check that the database you are asking for exists before processing your request.
I removed the "user=project_user password=mytestpassword" part from the lines in the [databases] section of the pgbouncer.ini file. As far as I tested, once you remove this part, pgbouncer falls back to your configured auth; in my case that was the userlist.txt file.
I added the line "postgres = host=127.0.0.1 port=5432 dbname=postgres":
[databases]
postgres = host=127.0.0.1 port=5432 dbname=postgres
my_database = host=127.0.0.1 port=5432 dbname=my_database
My userlist.txt file looks like this (I am using auth_type = md5, so the password is stored as an md5 hash):
"my_user" "md5passwordandsoelse"
I have added my admin users to my pgbouncer.ini file:
admin_users = postgres, my_user
After all these manipulations I advise you to check which user you are running queries as, using this simple query:
select current_user;
In the end, this query should return your selected username (in my case it was my_user).
P.S. I should also mention that I was using 127.0.0.1 because my pgbouncer is installed on the same server as postgres.
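To sanity-check the whole chain from the shell, you can connect through pgbouncer naming the database and user explicitly and run the same query in one go (my_user and my_database are the placeholder names from the config above):
psql -h 127.0.0.1 -p 6432 -U my_user -d my_database -c 'select current_user;'
If the [databases] entry and the userlist.txt line match up, this returns my_user instead of a pooler error.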

PostgreSQL - how to be 100% sure the connection to the server uses SSL?

Is it possible to check (as a non-root user, via SQL) whether my connection from the client to the server uses SSL? My destination server accepts both secured and unsecured connections.
as of 9.5:
https://www.postgresql.org/docs/current/static/monitoring-stats.html#PG-STAT-SSL-VIEW
The pg_stat_ssl view will contain one row per backend or WAL sender
process, showing statistics about SSL usage on this connection. It can
be joined to pg_stat_activity or pg_stat_replication on the pid column
to get more details about the connection.
t=# set role notsu ;
SET
Time: 9.289 ms
t=> select * from pg_stat_ssl where pid = pg_backend_pid();
pid | ssl | version | cipher | bits | compression | clientdn
-------+-----+---------+--------+------+-------------+----------
43767 | f | | | | |
(1 row)
Time: 10.846 ms
t=> \du+ notsu
List of roles
Role name | Attributes | Member of | Description
-----------+------------+-----------+-------------
notsu | | {} |
The above shows that my connection is not using SSL.
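Since the docs say pg_stat_ssl can be joined to pg_stat_activity on the pid column, a short sketch for seeing the SSL status of every current connection (not just your own) could look like this; note that details of other users' sessions may be hidden from an unprivileged role:
t=> select a.pid, a.usename, a.client_addr, s.ssl, s.version, s.cipher
from pg_stat_activity a
join pg_stat_ssl s on s.pid = a.pid;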

FATAL: GTM error, could not obtain snapshot. Current XID = 0, Autovac = 0

After running virtually any command (e.g. createdb test) from the GTM host inside the pgxc_ctl tool, I get the following error:
FATAL: GTM error, could not obtain snapshot. Current XID = 0, Autovac = 0
Postgres-XL is configured and installed on 4 nodes (/etc/hosts):
172.23.102.115 coordinator001
172.23.102.116 datanode001
172.23.103.0 datanode002
172.23.102.114 gtm001
Each node can ssh into the others, and pg_hba.conf contains:
host all all 0.0.0.0/0 trust
GTM node has this configuration
I would appreciate any tips or ideas on where to dig next.
[edit]
I get this error when connecting directly:
psql -h coordinator001 -p 6668 -U postgres testdb
psql: FATAL: GTM error, could not obtain snapshot. Current XID = 0, Autovac = 0

Cannot Setup Instance of Greenplum Command Center Web Application on Greenplum Open Source Database with Centos 7 Cluster

I have the Greenplum open source database running on a 3-node CentOS 7 cluster. The database is running and I am able to connect and run queries. Installation of Greenplum Command Center 2.0 works. When I try to configure an instance for the web application part using
gpcmdr --setup
I get the following error:
Creating instance schema in GPDB. Please wait ...
Failed to setup Command Center instance [myCustomInstance]:
Exception encountered while fetching GPDB version info Connection error for query select version();:
FATAL: Ident authentication failed for user "gpmon"
Here is my pg_hba.conf file for testing purposes. It still generates the above error, even with the host all all ::1/128 trust line:
# IPv6 local connections:
local all gpadmin ident
host all gpadmin 127.0.0.1/28 trust
host all gpadmin 172.17.0.1/32 trust
host all gpadmin 192.168.65.90/32 trust
host all gpadmin 192.168.122.1/32 trust
host all gpadmin ::1/128 trust
host all gpadmin fe80::210:18ff:fe94:3768/128 trust
host all gpadmin fe80::42:11ff:fea9:f1df/128 trust
host all gpadmin fe80::b84c:8bff:fe4a:5ce2/128 trust
host all gpadmin fe80::419:d7ff:fe90:6c48/128 trust
host all gpadmin fe80::c0ff:81ff:feae:c1ec/128 trust
local replication gpadmin ident
host replication gpadmin samenet trust
local gpperfmon gpmon md5
host all gpmon 0.0.0.0/0 md5
host all gpmon ::1/128 md5
host all all ::1/128 trust
I added host all gpmon ::1/128 md5 before the last line, restarted the database, and reran gpcmdr --setup. Below are the log files:
gpperfmon/logs
2016-03-14 20:37:19|:-LOG: sounds like you have just upgraded your database, creating newer tables
2016-03-14 20:37:19|:-WARNING: [gpmondb.c:55] failed to execut query 'BEGIN; CREATE TABLE public.log_alert_history (LIKE gp_toolkit.__gp_log_master_ext) DISTRIBUTED BY (logtime) PARTITION BY range (logtime)(START (date '2010-01-01') END (date '2010-02-01') EVERY (interval '1 month')); COMMIT;': ERROR: relation "gp_toolkit.__gp_log_master_ext" does not exist
2016-03-14 20:37:19|:-WARNING: [gpmondb.c:1695] gpdb error ERROR: current transaction is aborted, commands ignored until end of transaction block
query: SELECT encoding FROM pg_catalog.pg_database d WHERE d.datname = 'gpperfmon'
2016-03-14 20:37:19|:-WARNING: [gpmondb.c:1769] gpdb failed to get server encoding.
pg_logs/gpdb-2016-03-14_203718.csv
2016-03-14 20:39:15.586557 EDT,"gpmon","gpperfmon",p11521,th-217848000,"[local]",,2016-03-14 20:39:15 EDT,89956,con25,cmd1,seg-1,,dx64,x89956,sx1,"LOG","00000","statement: SELECT sess_id, current_query FROM pg_stat_activity;",,,,,,"SELECT sess_id, current_query FROM pg_stat_activity;",0,,"postgres.c",1553,
2016-03-14 20:39:19.595436 EDT,"gpmon","gpperfmon",p11574,th-217848000,"[local]",,2016-03-14 20:39:19 EDT,89958,con26,cmd1,seg-1,,dx65,x89958,sx1,"LOG","00000","statement: insert into system_history select * from _system_tail;",,,,,,"insert into system_history select * from _system_tail;",0,,"postgres.c",1553,
2016-03-14 20:39:19.628287 EDT,"gpmon","gpperfmon",p11580,th-217848000,"[local]",,2016-03-14 20:39:19 EDT,89958,con26,cmd2,seg-1,slice1,dx65,x89958,sx1,"LOG","00000","statement: insert into system_history select * from _system_tail;",,,,,,"insert into system_history select * from _system_tail;",0,,"postgres.c",1096,
2016-03-14 20:39:19.681179 EDT,"gpmon","gpperfmon",p11588,th-217848000,"[local]",,2016-03-14 20:39:19 EDT,89961,con28,cmd1,seg-1,,dx66,x89961,sx1,"LOG","00000","statement: insert into queries_history select * from _queries_tail;",,,,,,"insert into queries_history select * from _queries_tail;",0,,"postgres.c",1553,
2016-03-14 20:39:19.713717 EDT,"gpmon","gpperfmon",p11594,th-217848000,"[local]",,2016-03-14 20:39:19 EDT,89961,con28,cmd2,seg-1,slice1,dx66,x89961,sx1,"LOG","00000","statement: insert into queries_history select * from _queries_tail;",,,,,,"insert into queries_history select * from _queries_tail;",0,,"postgres.c",1096,
It seems gp_toolkit.__gp_log_master_ext doesn't exist. This relation is created when GPDB generates its schema. Would you please try "\d gp_toolkit.__gp_log_master_ext" in GPDB and check whether the relation is there?