What does the slash denote in PostgreSQL virtual transaction identifiers?

For example, my postgresql-main.log contains entries such as the following:
process 20234 session 5901502e.4f0a vtransaction transaction 0 : LOG: connection received: host=[local]
process 20234 session 5901502e.4f0a vtransaction 2/1 transaction 0 : LOG: connection authorized: user=postgres database=postgres
process 20234 session 5901502e.4f0a vtransaction 2/2 transaction 0 : LOG: statement: SELECT d.datname as "Name",
pg_catalog.pg_get_userbyid(d.datdba) as "Owner",
pg_catalog.pg_encoding_to_char(d.encoding) as "Encoding",
d.datcollate as "Collate",
d.datctype as "Ctype",
pg_catalog.array_to_string(d.datacl, E'\n') AS "Access privileges"
FROM pg_catalog.pg_database d
ORDER BY 1;
process 20234 session 5901502e.4f0a vtransaction transaction 0 : LOG: disconnection: session time: 0:00:00.004 user=postgres database=postgres host=[local]
process 20237 session 5901502f.4f0d vtransaction transaction 0 : LOG: connection received: host=[local]
process 20237 session 5901502f.4f0d vtransaction 2/3 transaction 0 : LOG: connection authorized: user=postgres database=postgres
2017-04-26 19:58:07 MDT process 20237 remote [local] session 5901502f.4f0d vtransaction 2/4 transaction 0 : LOG: statement: SELECT d.datname as "Name",
pg_catalog.pg_get_userbyid(d.datdba) as "Owner",
pg_catalog.pg_encoding_to_char(d.encoding) as "Encoding",
d.datcollate as "Collate",
d.datctype as "Ctype",
pg_catalog.array_to_string(d.datacl, E'\n') AS "Access privileges"
FROM pg_catalog.pg_database d
ORDER BY 1;
process 20237 session 5901502f.4f0d vtransaction transaction 0 : LOG: disconnection: session time: 0:00:00.002 user=postgres database=postgres host=[local]
Is there any relation between virtual transaction IDs 2/1 and 2/2 above, since they share the prefix 2/?
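(For reference: per the PostgreSQL documentation, a virtual transaction ID has the form backendID/localXID, where the number before the slash identifies the backend (session slot) and the number after it is a transaction counter local to that backend. So 2/1 and 2/2 above are simply two successive transactions run by the same backend. You can watch this live in pg_locks, for example:
SELECT locktype, virtualxid, virtualtransaction, pid, granted
FROM pg_locks
WHERE locktype = 'virtualxid';
Every running transaction holds an exclusive lock on its own virtual transaction ID, so each active backend shows one such row.)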

Related

CREATE TABLESPACE problem in a Db2 HADR environment

We have Db2 10.5.0.7 on CentOS 6.9 with TSAMP 3.2 as our high-availability solution. When we create a tablespace on the primary database, we encounter the following errors on the standby:
2019-08-31-08.47.32.164952+270 I87056E2779 LEVEL: Error (OS)
PID : 4046 TID : 47669095425792 PROC : db2sysc 0
INSTANCE: db2inst1 NODE : 000 DB : SAMDB
APPHDL : 0-8 APPID: *LOCAL.DB2.190725231126
HOSTNAME: samdb-b
EDUID : 155 EDUNAME: db2redom (SAMDB) 0
FUNCTION: DB2 Common, OSSe, ossGetDiskInfo, probe:130
MESSAGE : ECF=0x90000001=-1879048191=ECF_ACCESS_DENIED
Access denied
CALLED : OS, -, fopen OSERR: EACCES (13)
DATA #1 : String, 12 bytes
/proc/mounts
DATA #2 : String, 25 bytes
/dbdata1/samdbTsContainer
DATA #3 : unsigned integer, 8 bytes
2019-08-31-08.47.32.185625+270 E89836E494 LEVEL: Error
PID : 4046 TID : 47669095425792 PROC : db2sysc 0
INSTANCE: db2inst1 NODE : 000 DB : SAMDB
APPHDL : 0-8 APPID: *LOCAL.DB2.190725231126
HOSTNAME: samdb-b
EDUID : 155 EDUNAME: db2redom (SAMDB) 0
FUNCTION: DB2 UDB, high avail services, sqlhaGetLocalDiskInfo, probe:9433
MESSAGE : ECF=0x90000001=-1879048191=ECF_ACCESS_DENIED
Access denied
2019-08-31-08.47.32.186258+270 E90331E484 LEVEL: Error
PID : 4046 TID : 47669095425792 PROC : db2sysc 0
INSTANCE: db2inst1 NODE : 000 DB : SAMDB
APPHDL : 0-8 APPID: *LOCAL.DB2.190725231126
HOSTNAME: samdb-b
EDUID : 155 EDUNAME: db2redom (SAMDB) 0
FUNCTION: DB2 UDB, high avail services, sqlhaCreateMount, probe:9746
RETCODE : ZRC=0x827300AA=-2106392406=HA_ZRC_FAILED "SQLHA API call error"
2019-08-31-08.47.32.186910+270 I90816E658 LEVEL: Error
PID : 4046 TID : 47669095425792 PROC : db2sysc 0
INSTANCE: db2inst1 NODE : 000 DB : SAMDB
APPHDL : 0-8 APPID: *LOCAL.DB2.190725231126
HOSTNAME: samdb-b
EDUID : 155 EDUNAME: db2redom (SAMDB) 0
FUNCTION: DB2 UDB, buffer pool services, sqlbDMSAddContainerRequest, probe:812
MESSAGE : ZRC=0x827300AA=-2106392406=HA_ZRC_FAILED "SQLHA API call error"
DATA #1 : String, 36 bytes
Cluster add mount operation failed:
DATA #2 : String, 37 bytes
/dbdata1/samdbTsContainer/TSPKGCACH.1
DATA #3 : String, 8 bytes
SAMDB
2019-08-31-08.47.32.190537+270 E113909E951 LEVEL: Error
PID : 4046 TID : 47669095425792 PROC : db2sysc 0
INSTANCE: db2inst1 NODE : 000 DB : SAMDB
APPHDL : 0-8 APPID: *LOCAL.DB2.190725231126
HOSTNAME: samdb-b
EDUID : 155 EDUNAME: db2redom (SAMDB) 0
FUNCTION: DB2 UDB, buffer pool services, sqlblog_reCreatePool, probe:3134
MESSAGE : ADM6106E Table space "TSPKGCACH" (ID = "49") could not be created
during the rollforward operation. The most likely cause is that there
is not enough space to create the containers associated with the
table space. Connect to the database after the rollforward operation
completes and use the SET TABLESPACE CONTAINERS command to assign
containers to the table space. Then, issue another ROLLFORWARD
DATABASE command to complete recovery of this table space.
2019-08-31-08.47.32.200949+270 E114861E592 LEVEL: Error
PID : 4046 TID : 47669095425792 PROC : db2sysc 0
INSTANCE: db2inst1 NODE : 000 DB : SAMDB
APPHDL : 0-8 APPID: *LOCAL.DB2.190725231126
HOSTNAME: samdb-b
EDUID : 155 EDUNAME: db2redom (SAMDB) 0
FUNCTION: DB2 UDB, buffer pool services, sqlbIncPoolState, probe:4628
MESSAGE : ADM12512W Log replay on the HADR standby has stopped on table space
"TSPKGCACH" (ID "49") because it has been put into "ROLLFORWARD PENDING" state.
There is free space available for the database, the specified path (/dbdata1/samdbTsContainer) exists on the server, and we can create files in it manually.
All settings are equivalent on the primary and standby. db2inst1 is the owner of /dbdata1/samdbTsContainer and its permissions are drwxr-xr-x; the result of su - db2inst1 -c "ulimit -Hf" is unlimited; the file system type is ext3. The CREATE TABLESPACE statement is as follows:
CREATE LARGE TABLESPACE TSPKGCACH IN DATABASE PARTITION GROUP IBMDEFAULTGROUP PAGESIZE 8 K MANAGED BY DATABASE USING (FILE '/dbdata1/samdbTsContainer/TSPKGCACH.1' 5120) ON DBPARTITIONNUM (0) EXTENTSIZE 64 PREFETCHSIZE 64 BUFFERPOOL BP8KPKGCACH OVERHEAD 10.5 TRANSFERRATE 0.14 DATA TAG NONE NO FILE SYSTEM CACHING;
SELinux is disabled and the sector size is 512 bytes. The mount options are as follows:
/dev/sdf1 /dbdata1 ext3 rw,relatime,errors=continue,barrier=1,data=ordered 0 0
We cannot reproduce the problem at will; it occurs intermittently and we don't know its cause, but once it appears it persists until the server is rebooted.
Restarting the standby server clears the problem, but we then have to drop and recreate the tablespace. Any ideas about this problem?
From the error it looks to me as if the problem is not with access to the container file itself but rather with /proc/mounts, which Db2 uses to map containers to file systems (to know, e.g., the FS type). Hence I suggest testing whether all of:
cat /proc/mounts
cat /proc/self/mounts
mount
work OK when run as the Db2 instance owner ID (db2inst1). If not, this implies some odd OS issue that Db2 is a victim of, and we would need more OS diagnostics (e.g. an strace of the cat /proc/mounts command) to understand it.
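For example, a quick check run as the instance owner might look like this (the strace options here are just one reasonable choice):
su - db2inst1 -c "strace -f -e trace=open,openat cat /proc/mounts"
An open of /proc/mounts returning -1 EACCES there would confirm that the OS itself is denying access.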
Edit:
To confirm this theory I've run a quick test with Db2 11.1. Note this must be a TSA-controlled environment for Db2 to follow the sqlhaCreateMount code path (if the container ends up on a separate mount, Db2 will add it to the TSA resource model).
On both primary and standby:
mkdir /db2data
chown db2v111:db2iadm /db2data
then on standby:
chmod o-rx /proc
(I couldn't find a "smarter" way to hit EACCES on mount info.)
When I then run on the primary:
db2 "create tablespace test managed by database using (file '/db2data/testts' 100 M)"
it completes fine on the primary, but the standby hits exactly the error you are seeing:
2019-06-21-03.00.37.087693+120 I1774E2661 LEVEL: Error (OS)
PID : 10379 TID : 46912992438016 PROC : db2sysc 0
INSTANCE: db2v111 NODE : 000 DB : SAMPLE
APPHDL : 0-4492 APPID: *LOCAL.DB2.190621005919
HOSTNAME: rhel-hadrs.kkuduk.com
EDUID : 61 EDUNAME: db2redom (SAMPLE) 0
FUNCTION: DB2 Common, OSSe, ossGetDiskInfo, probe:130
MESSAGE : ECF=0x90000001=-1879048191=ECF_ACCESS_DENIED
Access denied
CALLED : OS, -, fopen OSERR: EACCES (13)
DATA #1 : String, 12 bytes
/proc/mounts
DATA #2 : String, 8 bytes
/db2data
DATA #3 : unsigned integer, 8 bytes
1
CALLSTCK: (Static functions may not be resolved correctly, as they are resolved to the nearest symbol)
[0] 0x00002AAAB9CFD84B /home/db2v111/sqllib/lib64/libdb2osse.so.1 + 0x23F84B
[1] 0x00002AAAB9CFED51 ossLogSysRC + 0x101
[2] 0x00002AAAB9D19647 ossGetDiskInfo + 0xF07
[3] 0x00002AAAAC52402C _Z21sqlhaGetLocalDiskInfoPKcjPcjS1_jS1_ + 0x26C
[4] 0x00002AAAAC523C5F _Z16sqlhaGetDiskInfoPKcS0_jPcjS1_jS1_ + 0x29F
[5] 0x00002AAAAC521CA0 _Z16sqlhaCreateMountPKcS0_m + 0x350
[6] 0x00002AAAACDE8D5D _Z26sqlbDMSAddContainerRequestP12SQLB_POOL_CBP16SQLB_POOLCONT_CBP12SQLB_GLOBALSP14SQLB_pfParIoCbbm + 0x90D
[7] 0x00002AAAACE14FF9 _Z29sqlbDoDMSAddContainerRequestsP12SQLB_POOL_CBP16SQLB_POOLCONT_CBjP26SQLB_AS_CONT_AND_PATH_INFOP12SQLB_GLOBALS + 0x2D9
[8] 0x00002AAAACE0C20F _Z17sqlbDMSCreatePoolP12SQLB_POOL_CBiP16SQLB_POOLCONT_CBbP12SQLB_GLOBALS + 0x103F
[9] 0x00002AAAACDB1EAC _Z13sqlbSetupPoolP12SQLB_GLOBALSP12SQLB_POOL_CBPKciiiihiP19SQLB_CONTAINER_SPECllblsib + 0xE4C
-> it is an issue with /proc/mounts access, not the target path itself, to which I can write with no issues:
[db2v111@rhel-hadrs ~]$ echo "test" > /db2data/testfile
If it were a path access issue:
chmod o+rx /proc
chmod a-rw /db2data
then the error during the "CREATE TABLESPACE" redo on the standby would be different:
2019-06-21-03.07.29.175486+120 I35023E592 LEVEL: Error
PID : 10379 TID : 46912992438016 PROC : db2sysc 0
INSTANCE: db2v111 NODE : 000 DB : SAMPLE
APPHDL : 0-4492 APPID: *LOCAL.DB2.190621005919
HOSTNAME: rhel-hadrs.kkuduk.com
EDUID : 61 EDUNAME: db2redom (SAMPLE) 0
FUNCTION: DB2 UDB, buffer pool services, sqlbCreateAndLockParent, probe:918
MESSAGE : ZRC=0x8402001E=-2080243682=SQLB_CONTAINER_NOT_ACCESSIBLE
"Container not accessible"
DATA #1 : <preformatted>
Failed at directory /db2data.
2019-06-21-03.07.29.175799+120 I35616E619 LEVEL: Severe
PID : 10379 TID : 46912992438016 PROC : db2sysc 0
INSTANCE: db2v111 NODE : 000 DB : SAMPLE
APPHDL : 0-4492 APPID: *LOCAL.DB2.190621005919
HOSTNAME: rhel-hadrs.kkuduk.com
EDUID : 61 EDUNAME: db2redom (SAMPLE) 0
FUNCTION: DB2 UDB, buffer pool services, sqlbCreateAndLockParent, probe:722
MESSAGE : ZRC=0x8402001E=-2080243682=SQLB_CONTAINER_NOT_ACCESSIBLE
"Container not accessible"
DATA #1 : <preformatted>
Failed to create a portion of the path /db2data/testts2
(A few more errors follow, pointing directly to the permissions on /db2data.)
This proves it is a /proc access issue: the actual problem is the db2sysc process hitting EACCES when running fopen on /proc/mounts. Perhaps /proc gets completely unmounted? Either way, you need to debug it further with your OS team.
Edit:
When it comes to debugging and proving that the error is returned by the OS, we would have to trace the open() syscalls issued by Db2. strace can do that, but its overhead is too high for a production system. If you can get SystemTap installed on the system, I suggest a script like this (a basic version):
probe nd_syscall.open.return
{
    if ( user_string( @entry( pointer_arg(1) ) ) =~ ".*mounts" )
    {
        printf("exec: %s pid: %d uid: %d (euid: %d) gid: %d (egid: %d) run open(%s) rc: %d\n",
               execname(), pid(), uid(), euid(), gid(), egid(),
               user_string( @entry( pointer_arg(1) ), "-" ), returnval() )
    }
}
It uses the nd_syscall probe, so it will work even without the kernel debuginfo package. You can run it like this:
$ stap open.stap
exec: cat pid: 24159 uid: 0 (euid: 0) gid: 0 (egid: 0) run open(/proc/mounts) rc: 3
exec: cat pid: 24210 uid: 0 (euid: 0) gid: 0 (egid: 0) run open(/proc/mounts) rc: 3
exec: cat pid: 24669 uid: 1111 (euid: 1111) gid: 1001 (egid: 1001) run open(/proc/mounts) rc: 3
exec: cat pid: 24734 uid: 1111 (euid: 1111) gid: 1001 (egid: 1001) run open(/proc/mounts) rc: -13
exec: cat pid: 24891 uid: 1111 (euid: 1111) gid: 1001 (egid: 1001) run open(/proc/self/mounts) rc: -13
exec: ls pid: 24971 uid: 1111 (euid: 1111) gid: 1001 (egid: 1001) run open(/proc/mounts) rc: -13
-> at some point I revoked access to /proc and the open attempt failed with -13 (EACCES). You just need to enable the script on the system when you see the error and check whether anything is logged when Db2 fails.
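(If output volume is a concern, the if condition could also test execname() == "db2sysc" so that only Db2's own opens are printed; execname() is already available inside the probe.)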

psql: intermittent segmentation fault: server closed the connection unexpectedly

I looked at similar-sounding questions but none seemed to address my case:
macOS Sierra, 16 GB RAM, localhost (no other postgres running anywhere)
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
The logs say:
2019-03-23 08:12:04.076 MDT [841] LOG: server process (PID 1175) was terminated by signal 11: Segmentation fault
2019-03-23 07:13:10.459 MDT [841] LOG: terminating any other active server processes
2019-03-23 07:13:10.459 MDT [951] WARNING: terminating connection because of crash of another server process
2019-03-23 07:13:10.459 MDT [951] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2019-03-23 07:13:10.459 MDT [951] HINT: In a moment you should be able to reconnect to the database and repeat your command.
2019-03-23 07:13:10.460 MDT [980] FATAL: the database system is in recovery mode
2019-03-23 07:13:10.461 MDT [841] LOG: all server processes terminated; reinitializing
2019-03-23 07:13:10.470 MDT [981] LOG: database system was interrupted; last known up at 2019-03-23 07:06:47 MDT
2019-03-23 07:13:10.744 MDT [981] LOG: database system was not properly shut down; automatic recovery in progress
2019-03-23 07:13:10.746 MDT [981] LOG: redo starts at 28/15BF74F0
2019-03-23 07:13:10.746 MDT [981] LOG: invalid record length at 28/15BF7528: wanted 24, got 0
2019-03-23 07:13:10.746 MDT [981] LOG: redo done at 28/15BF74F0
2019-03-23 07:13:10.755 MDT [841] LOG: database system is ready to accept connections
PSQL version:
psql --version
psql (PostgreSQL) 11.1
Happens in both psql terminal and pgAdmin. No CPU or memory spikes when this happens.
It doesn't happen on simple result sets. See this example: it's the same query, the first time returning a count, the second time returning rows (which triggers the error):
shill=# with yards_manual as (
select device_id,loc, sum(sq_meters)*10.7639 as manual_yard_sq_ft from device d
inner join zones z on (z.device_id=d.id)
where z.enabled and z.sq_meters<46 or z.sq_meters>47
group by 1,2
)
select count(device_id) from yards_manual;
count
-------
84983
shill=# with yards_manual as (
shill(# select device_id,loc, sum(sq_meters)*10.7639 as manual_yard_sq_ft from device d
shill(# inner join zones z on (z.device_id=d.id)
shill(# where z.enabled and z.sq_meters<46 or z.sq_meters>47 --and z.crop_type in ('WARM_SEASON_GRASS','COOL_SEASON_GRASS')
shill(# group by 1,2
shill(# )
shill-#
shill-# select distinct device_id, y.manual_yard_sq_ft, build_area_ft2 , prop_area_ft2,(prop_area_ft2-build_area_ft2) as gis_yard_sq_ft2 --, st_npoints(property_geom) as corners
shill-# from yards_manual y inner join yards b on st_contains(b.property_geom,y.loc)
shill-# where (prop_area_ft2-build_area_ft2)>0 and (prop_area_ft2-build_area_ft2)<20000
shill-# ;
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
Although, this last query sometimes returns. Once it errors out, it always errors out until I start/stop the db. But starting/stopping does not always work. I have tried restarting Postgres and backing up and restoring the database, to no avail. The problem just started happening. VACUUM FULL ran fine, but the error still happens. The db is 24 GB.
Here is the same query now randomly returning:
device_id | manual_yard_sq_ft | build_area_ft2 | prop_area_ft2 | gis_yard_sq_ft2
----------+-------------------+------------------+------------------+------------------
0022682e | 3999.9944068 | 1666.25757779497 | 12948.051385913 | 11281.793808118
002a4379 | 1934.99812741536 | 2907.60847006035 | 15872.352961764 | 12964.7444917037
002adeb4 | 1599.9984516096 | 2856.54321331877 | 9800.49184470172 | 6943.94863138295
But when I ran it a second time, it errored out as described above.
Here's the SQL execution plan:
Unique (cost=137590686.48..137602981.21 rows=819649 width=548)
Output: y.device_id, y.manual_yard_sq_ft, b.build_area_ft2, b.prop_area_ft2, ((b.prop_area_ft2 - b.build_area_ft2))
CTE yards_manual
-> Finalize GroupAggregate (cost=163766.01..227836.10 rows=519752 width=77)
Output: z.device_id, d.loc, (sum(z.sq_meters) * '10.7639'::double precision)
Group Key: z.device_id, d.loc
-> Gather Merge (cost=163766.01..218090.75 rows=433126 width=77)
Output: z.device_id, d.loc, (PARTIAL sum(z.sq_meters))
Workers Planned: 2
-> Partial GroupAggregate (cost=162765.98..167097.24 rows=216563 width=77)
Output: z.device_id, d.loc, PARTIAL sum(z.sq_meters)
Group Key: z.device_id, d.loc
-> Sort (cost=162765.98..163307.39 rows=216563 width=77)
Output: z.device_id, d.loc, z.sq_meters
Sort Key: z.device_id, d.loc
-> Parallel Hash Join (cost=8564.46..133948.71 rows=216563 width=77)
Output: z.device_id, d.loc, z.sq_meters
Hash Cond: ((z.device_id)::text = (d.id)::text)
-> Parallel Seq Scan on public.zones z (cost=0.00..118450.79 rows=216563 width=45)
Output: z.device_id, z.sq_meters
Filter: ((z.enabled AND (z.sq_meters < '46'::double precision)) OR (z.sq_meters > '47'::double precision))
-> Parallel Hash (cost=5648.76..5648.76 rows=120376 width=69)
Output: d.loc, d.id
-> Parallel Seq Scan on public.device d (cost=0.00..5648.76 rows=120376 width=69)
Output: d.loc, d.id
-> Sort (cost=137362850.38..137364899.50 rows=819649 width=548)
Output: y.device_id, y.manual_yard_sq_ft, b.build_area_ft2, b.prop_area_ft2, ((b.prop_area_ft2 - b.build_area_ft2))
Sort Key: y.device_id, y.manual_yard_sq_ft, b.build_area_ft2, b.prop_area_ft2, ((b.prop_area_ft2 - b.build_area_ft2))
-> Nested Loop (cost=0.41..136878917.80 rows=819649 width=548)
Output: y.device_id, y.manual_yard_sq_ft, b.build_area_ft2, b.prop_area_ft2, (b.prop_area_ft2 - b.build_area_ft2)
-> CTE Scan on yards_manual y (cost=0.00..10395.04 rows=519752 width=556)
Output: y.device_id, y.loc, y.manual_yard_sq_ft
-> Index Scan using prop_geom_idx on public.yards b (cost=0.41..263.31 rows=2 width=173)
Output: b.block_id, b.property_geom, b.building_geom, b.prop_area_ft2, b.build_area_ft2, b.yard_area_ft, b.vegetation, b.yard_id
Index Cond: (b.property_geom ~ y.loc)
Filter: (((b.prop_area_ft2 - b.build_area_ft2) > '0'

PgBouncer and auth to PostgreSQL

pgbouncer version 1.7.2
psql (9.5.6)
I am trying to use auth_hba_file (/var/lib/pgsql/9.5/data/pg_hba.conf) with PgBouncer.
Config pgbouncer.ini:
[databases]
postgres = host=localhost port=5432 dbname=postgres user=postgres
test = host=localhost port=5432 dbname=test user=test
[pgbouncer]
logfile = /var/log/pgbouncer/pgbouncer.log
pidfile = /var/run/pgbouncer/pgbouncer.pid
listen_addr = *
listen_port = 6432
auth_type = hba
auth_hba_file = /var/lib/pgsql/9.5/data/pg_hba.conf
admin_users = postgres
stats_users = stats, postgres
pool_mode = session
server_reset_query = DISCARD ALL
max_client_conn = 100
default_pool_size = 20
cat pg_hba.conf | grep -v "#" | grep -v "^$"
local all all trust
host all all 127.0.0.1/32 trust
host all all ::1/128 trust
host test test 10.255.4.0/24 md5
psql -h 10.233.4.16 -p 5432 -U test
Password for user test:
psql (9.5.6)
Type "help" for help.
test=> \q
psql -h 10.233.4.16 -p 6432 -U test
psql: ERROR: No such user: test
tail -fn10 /var/log/pgbouncer/pgbouncer.log
LOG C-0x78f7e0: (nodb)/(nouser)@10.255.4.245:8963 closing because: No such user: test (age=0)
WARNING C-0x78f7e0: (nodb)/(nouser)@10.255.4.245:8963 Pooler Error: No such user: test
LOG C-0x78f7e0: (nodb)/(nouser)@10.255.4.245:8963 login failed: db=test user=test
But I cannot connect to PostgreSQL (through PgBouncer) using pg_hba.conf.
Can somebody help?
Do you have an example of using auth_hba_file?
Thanks
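(For reference: with auth_type = hba, PgBouncer still needs its own source of user credentials; the HBA file only selects the authentication method per connection, and "No such user" means the user was found neither in auth_file nor via auth_query/auth_user. A minimal sketch, with a hypothetical path, would add to pgbouncer.ini:
auth_file = /etc/pgbouncer/userlist.txt
with userlist.txt containing one quoted pair per user, e.g. "test" "md5<md5 hash of password concatenated with username>".)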
I changed config:
[root@dev-metrics2 pgbouncer]# cat pgbouncer.ini | grep -v ";" | grep -v "^$" | grep -v "#"
[databases]
postgres = host=localhost port=5432 dbname=postgres user=postgres
test = host=localhost port=5432 dbname=test auth_user=test
[pgbouncer]
logfile = /var/log/pgbouncer/pgbouncer.log
pidfile = /var/run/pgbouncer/pgbouncer.pid
listen_addr = *
listen_port = 6432
auth_query = SELECT usename, passwd FROM pg_shadow WHERE usename=$1
admin_users = postgres
stats_users = stats, postgres
pool_mode = session
server_reset_query = DISCARD ALL
max_client_conn = 100
default_pool_size = 20
Dropped and recreated the user and DB:
[local]:5432 postgres@postgres # DROP DATABASE test;
DROP DATABASE
[local]:5432 postgres@postgres # DROP USER test ;
DROP ROLE
[local]:5432 postgres@postgres # CREATE USER test with password 'test';
CREATE ROLE
[local]:5432 postgres@postgres # CREATE DATABASE test with owner test;
CREATE DATABASE
PGPASSWORD=test psql -h 10.233.4.16 -p 6432 -U test
Password for user test:
psql: ERROR: Auth failed
tail -fn1 /var/log/pgbouncer/pgbouncer.log
LOG Stats: 0 req/s, in 0 b/s, out 0 b/s,query 0 us
LOG C-0x17b57a0: test/test@10.255.4.245:3069 login attempt: db=test user=test tls=no
LOG C-0x17b57a0: test/test@10.255.4.245:3069 closing because: client unexpected eof (age=0)
LOG C-0x17b57a0: test/test@10.255.4.245:3070 login attempt: db=test user=test tls=no
LOG C-0x17b57a0: test/test@10.255.4.245:3070 closing because: Auth failed (age=0)
WARNING C-0x17b57a0: test/test@10.255.4.245:3070 Pooler Error: Auth failed
Working config:
cat pgbouncer.ini | grep -v ";" | grep -v "^$" | grep -v "#"
[databases]
*= port=5432 auth_user=postgres
[pgbouncer]
logfile = /var/log/pgbouncer/pgbouncer.log
pidfile = /var/run/pgbouncer/pgbouncer.pid
listen_addr = *
listen_port = 6432
auth_query = SELECT usename, passwd FROM pg_shadow WHERE usename=$1
admin_users = postgres
stats_users = stats, postgres
pool_mode = session
server_reset_query = DISCARD ALL
max_client_conn = 100
default_pool_size = 20
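As an aside, the earlier "Auth failed" fits auth_user = test not being allowed to read pg_shadow; the working config does the lookup as the postgres superuser. If you would rather not use a superuser for that, the PgBouncer documentation suggests a SECURITY DEFINER lookup function along these lines (a sketch following the docs' example; the schema and function name are illustrative):
CREATE OR REPLACE FUNCTION pgbouncer.user_lookup(IN i_username text, OUT uname text, OUT phash text)
RETURNS record AS $$
BEGIN
    -- executes with the privileges of its (superuser) owner,
    -- so the auth_user itself does not need access to pg_shadow
    SELECT usename, passwd FROM pg_catalog.pg_shadow
     WHERE usename = i_username INTO uname, phash;
    RETURN;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
and then point PgBouncer at it with auth_query = SELECT * FROM pgbouncer.user_lookup($1).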
Try putting a space around the =:
*= port=5432 auth_user=postgres # old string
* = port=5432 auth_user=postgres # new string
This worked for me.

Reduce postgresql log verbosity

Every time I invoke a parameterized query I get too much output in the log file. For example, when inserting 3 users into a table I get the following log output:
2013-10-29 06:01:43 EDT LOG: duration: 0.000 ms parse <unnamed>: INSERT INTO users (login,role,password) VALUES
($1,$2,$3)
,($4,$5,$6)
,($7,$8,$9)
2013-10-29 06:01:43 EDT LOG: duration: 0.000 ms bind <unnamed>: INSERT INTO users (login,role,password) VALUES
($1,$2,$3)
,($4,$5,$6)
,($7,$8,$9)
2013-10-29 06:01:43 EDT DETAIL: parameters: $1 = 'guest', $2 = 'user', $3 = '123', $4 = 'admin', $5 = 'admin', $6 = '123', $7 = 'mark', $8 = 'power user', $9 = '123'
2013-10-29 06:01:43 EDT LOG: execute <unnamed>: INSERT INTO users (login,role,password) VALUES
($1,$2,$3)
,($4,$5,$6)
,($7,$8,$9)
2013-10-29 06:01:43 EDT DETAIL: parameters: $1 = 'guest', $2 = 'user', $3 = '123', $4 = 'admin', $5 = 'admin', $6 = '123', $7 = 'mark', $8 = 'power user', $9 = '123'
2013-10-29 06:01:43 EDT LOG: duration: 4.000 ms
Notice that the whole query appears three times: once for parse, once for bind and once for execute. And the complete set of parameters appears twice: for bind and for execute.
Note that this extra verbosity is only present when running parameterized queries.
Here is my config:
C:\Program Files\PostgreSQL\9.2\data>findstr log_ postgresql.conf
# "postgres -c log_connections=on". Some parameters can be changed at run time
log_destination = 'stderr' # Valid values are combinations of
log_directory = 'pg_log' # directory where log files are written,
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # log file name pattern,
log_file_mode = 0600 # creation mode for log files,
#log_truncate_on_rotation = off # If on, an existing log file with the
#log_rotation_age = 1d # Automatic rotation of logfiles will
#log_rotation_size = 10MB # Automatic rotation of logfiles will
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
log_min_messages = notice # values in order of decreasing detail:
log_min_error_statement = error # values in order of decreasing detail:
log_min_duration_statement = 0 # -1 is disabled, 0 logs all statements
#log_checkpoints = off
#log_connections = off
#log_disconnections = off
#log_duration = off
#log_error_verbosity = default # terse, default, or verbose messages
#log_hostname = off
log_line_prefix = '%t ' # special values:
#log_lock_waits = off # log lock waits >= deadlock_timeout
log_statement = 'all' # none, ddl, mod, all
#log_temp_files = -1 # log temporary files equal or larger
log_timezone = 'US/Eastern'
#log_parser_stats = off
#log_planner_stats = off
#log_executor_stats = off
#log_statement_stats = off
#log_autovacuum_min_duration = -1 # -1 disables, 0 logs all actions and
C:\Program Files\PostgreSQL\9.2\data>
So, my question is: how can I reduce the verbosity of the log for parameterized queries without affecting other queries? Ideally, I would like the query SQL and its parameters logged just once.
I don't think that's possible out of the box. You could write a logging hook and filter log entries.
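For what it's worth, a minimal sketch of such a hook, as a C extension loaded via shared_preload_libraries, might look like this (untested; the string match is just one way to spot the parse/bind duplicates):
#include "postgres.h"
#include "fmgr.h"
#include "utils/elog.h"

PG_MODULE_MAGIC;

void _PG_init(void);

static emit_log_hook_type prev_emit_log_hook = NULL;

/* Drop the LOG entries for the parse and bind phases; keep execute. */
static void
log_filter_hook(ErrorData *edata)
{
    if (edata->elevel == LOG && edata->message != NULL &&
        (strstr(edata->message, " parse <") != NULL ||
         strstr(edata->message, " bind <") != NULL))
        edata->output_to_server = false;    /* suppress this entry */

    /* chain to any previously installed hook */
    if (prev_emit_log_hook)
        prev_emit_log_hook(edata);
}

void
_PG_init(void)
{
    prev_emit_log_hook = emit_log_hook;
    emit_log_hook = log_filter_hook;
}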
I was facing the same issue. In the config you have shown, just change log_min_duration_statement = -1 (disabled) and the query will be logged only once. But if you have log_duration enabled, it will still log the duration three times, just not the query, like this:
2013-10-29 06:01:43 EDT LOG: duration: 0.000 ms
2013-10-29 06:01:43 EDT LOG: duration: 0.000 ms
2013-10-29 06:01:43 EDT LOG: execute <unnamed>: INSERT INTO users (login,role,password) VALUES
($1,$2,$3)
,($4,$5,$6)
,($7,$8,$9)
2013-10-29 06:01:43 EDT DETAIL: parameters: $1 = 'guest', $2 = 'user', $3 = '123', $4 = 'admin', $5 = 'admin', $6 = '123', $7 = 'mark', $8 = 'power user', $9 = '123'
2013-10-29 06:01:43 EDT LOG: duration: 4.000 ms
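Putting that together: if per-phase timings aren't needed, a minimal postgresql.conf sketch that should log each statement once, with its parameters, is:
log_statement = 'all'
log_min_duration_statement = -1    # disable duration-triggered statement logging
#log_duration = off                # keep plain duration logging off as well
Each parameterized query then produces a single execute line plus one DETAIL line with the parameter values.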

Why are long-running queries blank in postgresql log?

I'm running a log (log_min_duration_statement = 200) to analyse some slow queries in PostgreSQL 9.0, but the statements for the worst queries aren't being logged. Is there any way I can find out what the queries actually are?
(some values replaced with *** for brevity and privacy.)
2012-06-29 02:10:39 UTC LOG: duration: 266.658 ms statement: SELECT *** FROM "oauth_accesstoken" WHERE "oauth_accesstoken"."token" = E'***'
2012-06-29 02:10:40 UTC LOG: duration: 1797.400 ms statement:
2012-06-29 02:10:49 UTC LOG: duration: 1670.132 ms statement:
2012-06-29 02:10:50 UTC LOG: duration: 354.336 ms statement: SELECT *** FROM ***
...
There are some log file destination options in postgresql.conf, as shown below. I suggest using csvlog.
log_destination = 'csvlog'
logging_collector = on
log_directory = '/var/applog/pg_log/1922/'
log_rotation_age = 1d
log_rotation_size = 10MB
log_statement = 'ddl' # none, ddl, mod, all
log_min_duration_statement = 200
After making any changes, you need to reload the postgresql.conf file.
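For example, from psql:
SELECT pg_reload_conf();
or from the shell: pg_ctl reload -D /path/to/data (substitute your data directory).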
It turns out that because I was watching the logs with tail -f path | grep 'duration .+ ms', any statement starting with a newline was not visible. I was mainly doing this to highlight the duration string.
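A simple workaround (assuming GNU grep; the log path is illustrative) is to print some trailing context so multi-line statements remain visible:
tail -f /var/lib/pgsql/data/pg_log/postgresql.log | grep --line-buffered -A 10 -E 'duration: [0-9.]+ ms'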