Why are long-running queries blank in the PostgreSQL log?

I'm running a log (log_min_duration_statement = 200) to analyse some slow queries in PostgreSQL 9.0, but the statements for the worst queries aren't being logged. Is there any way I can find out what these queries actually are?
(some values replaced with *** for brevity and privacy.)
2012-06-29 02:10:39 UTC LOG: duration: 266.658 ms statement: SELECT *** FROM "oauth_accesstoken" WHERE "oauth_accesstoken"."token" = E'***'
2012-06-29 02:10:40 UTC LOG: duration: 1797.400 ms statement:
2012-06-29 02:10:49 UTC LOG: duration: 1670.132 ms statement:
2012-06-29 02:10:50 UTC LOG: duration: 354.336 ms statement: SELECT *** FROM ***
...

There are several log file destination options in postgresql.conf, as shown below. I suggest using csvlog.
log_destination = 'csvlog'
logging_collector = on
log_directory = '/var/applog/pg_log/1922/'
log_rotation_age = 1d
log_rotation_size = 10MB
log_statement = 'ddl' # none, ddl, mod, all
log_min_duration_statement = 200
After making any changes, you need to reload the postgresql.conf file.
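For example, most of these settings can be reloaded without a restart (a minimal sketch; note that logging_collector itself only takes effect after a full server restart):
-- Re-read postgresql.conf without restarting the server.
SELECT pg_reload_conf();
-- Confirm the values now in effect.
SHOW log_destination;
SHOW log_min_duration_statement;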

It turns out that because I was watching the logs with tail -f path | grep 'duration .+ ms', any statement that started with a newline was not visible. I was mainly doing this to highlight the duration string.

Related

pgBadger report is empty?

I'm using pgBadger to generate daily reports for the database, but the report is empty. Any ideas why?
Thanks.
log_min_duration_statement = -1
log_duration = on
log_line_prefix = '%t [%p]: user=%u,db=%d,app=%a,client=%h'
log_checkpoints = 'on'
log_lock_waits = 'on'
log_temp_files = 0
log_autovacuum_min_duration = 0
log_error_verbosity = 'default'
lc_messages='en_US.UTF-8'
-bash-4.2$ pgbadger -f stderr -d dbdev /var/lib/pgsql/13/data/log/postgresql-*.log -x html -o /var/lib/pgsql/13/data/log/pgbadger_dbdevv_`date +\%F`.html
[========================>] Parsed 23526878 bytes of 23526878
(100.00%), queries: 0, events: 0 LOG: Ok, generating html report...
The pgBadger documentation states:
You must first enable SQL query logging to have something to parse:
log_min_duration_statement = 0
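On PostgreSQL 13, a minimal sketch of enabling that (ALTER SYSTEM writes the value to postgresql.auto.conf; raise the threshold later if logging every statement is too verbose):
-- Log every completed statement together with its duration, so pgBadger has queries to parse.
ALTER SYSTEM SET log_min_duration_statement = 0;
-- log_duration = on by itself records only durations, never the SQL text, which is why the report stays empty.
ALTER SYSTEM SET log_duration = off;
SELECT pg_reload_conf();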

PostgreSQL pg_profile: error while creating snapshot

I am following https://github.com/zubkov-andrei/pg_profile to generate an AWR-like report.
The steps I have followed are below:
1) Enabled the parameters below in postgresql.conf (located in D:\Program Files\PostgreSQL\9.6\data)
track_activities = on
track_counts = on
track_io_timing = on
track_functions = on
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.max = 1000
pg_stat_statements.track = 'top'
pg_stat_statements.save = off
pg_profile.topn = 20
pg_profile.retention = 7
2) Manually copied all the files beginning with pg_profile to D:\Program Files\PostgreSQL\9.6\share\extension
3) From the pgAdmin4 console, executed the commands below successfully
CREATE EXTENSION dblink;
CREATE EXTENSION pg_stat_statements;
CREATE EXTENSION pg_profile;
4) To see which node is already present, I executed SELECT * from node_show();
which returned
node_name as local
connstr as dbname=postgres port=5432
enabled as true
5) To create a snapshot, I executed SELECT * from snapshot('local');
but I get the error below:
ERROR: could not establish connection
DETAIL: fe_sendauth: no password supplied
CONTEXT: SQL statement "SELECT dblink_connect('node_connection',node_connstr)"
PL/pgSQL function snapshot(integer) line 38 at PERFORM
PL/pgSQL function snapshot(name) line 9 at RETURN
SQL state: 08001
Once I am able to generate multiple snapshots, I guess I should be able to generate the report.
Just use SELECT * from snapshot().
Look at the code of the function: it calls the other one with the node as a parameter.
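The fe_sendauth error itself comes from dblink: the stored connection string (dbname=postgres port=5432) carries no user or password, so the server-to-server connection is rejected unless something like a .pgpass file or trust authentication covers it. As a quick check you can test the same kind of connection by hand; the user and password below are placeholders, not values from the question:
-- Hypothetical credentials; substitute ones that match your pg_hba.conf setup.
SELECT dblink_connect('node_connection_test',
                      'dbname=postgres port=5432 user=postgres password=***');
SELECT dblink_disconnect('node_connection_test');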

Combining postgres query and log duration

I am aware that you can show the duration and log queries using the configuration below in postgresql.conf
#------------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#------------------------------------------------------------------------------
log_statement = 'all'
log_duration = on
log_line_prefix = '{"Time":[%t], Host:%h} '
And it then produces log lines like:
{"Time":[2018-08-13 16:24:20 +08], Host:172.18.0.2} LOG: statement: SELECT "auth_user"."id", "auth_user"."password", "auth_user"."last_login", "auth_user"."is_superuser", "auth_user"."username", "auth_user"."first_name", "auth_user"."last_name", "auth_user"."email", "auth_user"."is_staff", "auth_user"."is_active", "auth_user"."date_joined" FROM "auth_user" WHERE "auth_user"."id" = 1
{"Time":[2018-08-13 16:24:20 +08], Host:172.18.0.2} LOG: duration: 7.694 ms
But can I combine the duration and the statement into a single line, like this?
LOG: { statement: ..., duration: 7.694 ms}
The way you are logging, the statement is logged when the server starts processing it, but the duration is only known at the end of the execution.
This is why it has to be logged as two different messages.
If you use log_min_duration_statement = 0 instead, the statement is logged at the end of execution together with the duration.
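A minimal sketch of that change (assuming a version with ALTER SYSTEM, i.e. 9.4 or later; on older versions edit postgresql.conf and reload instead):
-- Log each statement once, when it completes, together with its duration.
ALTER SYSTEM SET log_min_duration_statement = 0;
-- Turn off the separate messages so nothing is logged twice.
ALTER SYSTEM SET log_statement = 'none';
ALTER SYSTEM SET log_duration = off;
SELECT pg_reload_conf();
The resulting line has the form LOG: duration: 7.694 ms statement: SELECT ..., so the duration and the SQL text end up in the same log entry.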

Reduce postgresql log verbosity

Every time I invoke a parameterized query I get too much output in the log file. For example, when inserting 3 users into a table I get the following log output:
2013-10-29 06:01:43 EDT LOG: duration: 0.000 ms parse <unnamed>: INSERT INTO users (login,role,password) VALUES
($1,$2,$3)
,($4,$5,$6)
,($7,$8,$9)
2013-10-29 06:01:43 EDT LOG: duration: 0.000 ms bind <unnamed>: INSERT INTO users (login,role,password) VALUES
($1,$2,$3)
,($4,$5,$6)
,($7,$8,$9)
2013-10-29 06:01:43 EDT DETAIL: parameters: $1 = 'guest', $2 = 'user', $3 = '123', $4 = 'admin', $5 = 'admin', $6 = '123', $7 = 'mark', $8 = 'power user', $9 = '123'
2013-10-29 06:01:43 EDT LOG: execute <unnamed>: INSERT INTO users (login,role,password) VALUES
($1,$2,$3)
,($4,$5,$6)
,($7,$8,$9)
2013-10-29 06:01:43 EDT DETAIL: parameters: $1 = 'guest', $2 = 'user', $3 = '123', $4 = 'admin', $5 = 'admin', $6 = '123', $7 = 'mark', $8 = 'power user', $9 = '123'
2013-10-29 06:01:43 EDT LOG: duration: 4.000 ms
Notice that the whole query appears three times: for parse, for bind, and for execute. And the complete set of parameters appears twice: for bind and for execute.
Note that this extra verbosity only appears when running parameterized queries.
Here is my config:
C:\Program Files\PostgreSQL\9.2\data>findstr log_ postgresql.conf
# "postgres -c log_connections=on". Some parameters can be changed at run time
log_destination = 'stderr' # Valid values are combinations of
log_directory = 'pg_log' # directory where log files are written,
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # log file name pattern,
log_file_mode = 0600 # creation mode for log files,
#log_truncate_on_rotation = off # If on, an existing log file with the
#log_rotation_age = 1d # Automatic rotation of logfiles will
#log_rotation_size = 10MB # Automatic rotation of logfiles will
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
log_min_messages = notice # values in order of decreasing detail:
log_min_error_statement = error # values in order of decreasing detail:
log_min_duration_statement = 0 # -1 is disabled, 0 logs all statements
#log_checkpoints = off
#log_connections = off
#log_disconnections = off
#log_duration = off
#log_error_verbosity = default # terse, default, or verbose messages
#log_hostname = off
log_line_prefix = '%t ' # special values:
#log_lock_waits = off # log lock waits >= deadlock_timeout
log_statement = 'all' # none, ddl, mod, all
#log_temp_files = -1 # log temporary files equal or larger
log_timezone = 'US/Eastern'
#log_parser_stats = off
#log_planner_stats = off
#log_executor_stats = off
#log_statement_stats = off
#log_autovacuum_min_duration = -1 # -1 disables, 0 logs all actions and
C:\Program Files\PostgreSQL\9.2\data>
So, my question is: how can I reduce the verbosity of the log for parameterized queries without affecting other queries? Ideally, I would like the query SQL and its parameters to be logged just once.
I don't think that's possible out of the box. You could write a logging hook and filter log entries.
I was facing the same issue. With the config you have shown, just change log_min_duration_statement = -1 (disabled) and the query will be logged only once.
But if you have log_duration enabled, it will still log the duration three times, just without the query text, like:
2013-10-29 06:01:43 EDT LOG: duration: 0.000 ms
2013-10-29 06:01:43 EDT LOG: duration: 0.000 ms
2013-10-29 06:01:43 EDT LOG: execute <unnamed>: INSERT INTO users (login,role,password) VALUES
($1,$2,$3)
,($4,$5,$6)
,($7,$8,$9)
2013-10-29 06:01:43 EDT DETAIL: parameters: $1 = 'guest', $2 = 'user', $3 = '123', $4 = 'admin', $5 = 'admin', $6 = '123', $7 = 'mark', $8 = 'power user', $9 = '123'
2013-10-29 06:01:43 EDT LOG: duration: 4.000 ms
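If you want to try that suggestion per session before touching the config file, a minimal sketch (these are superuser-only settings, and 9.2 has no ALTER SYSTEM, so persistent changes still go into postgresql.conf):
-- Keep log_statement = 'all' so the query and its parameters are still logged once, at execute.
-- Stop repeating the SQL text in the per-phase duration messages.
SET log_min_duration_statement = -1;
-- Also skip the bare per-phase duration lines if you do not need them.
SET log_duration = off;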

Play Framework 2 + Ebean: raw SQL UPDATE query has no effect on the DB

I have a Play Framework 2.0.4 application that needs to modify rows in the DB.
I need to update a few messages in the DB to the status "opened" (read messages).
I did it like this:
String sql = " UPDATE message SET opened = true, opened_date = now() "
+" WHERE id_profile_to = :id1 AND id_profile_from = :id2 AND opened IS NOT true";
SqlUpdate update = Ebean.createSqlUpdate(sql);
update.setParameter("id1", myProfileId);
update.setParameter("id2", conversationProfileId);
int modifiedCount = update.execute();
I have configured PostgreSQL to log all queries.
modifiedCount is the actual number of modified rows, but the query runs inside a transaction.
After the query is executed in the DB there is a ROLLBACK, so the UPDATE is not applied.
I have tried switching the DB to H2, with the same result.
This is the query from the Postgres audit log:
2012-12-18 00:21:17 CET : S_1: BEGIN
2012-12-18 00:21:17 CET : <unnamed>: UPDATE message SET opened = true, opened_date = now() WHERE id_profile_to = $1 AND id_profile_from = $2 AND opened IS NOT true
2012-12-18 00:21:17 CET : parameters: $1 = '1', $2 = '2'
2012-12-18 00:21:17 CET : S_2: ROLLBACK
..........
The Play Framework documentation and the Ebean docs state that there is no transaction if none is declared (or a transient one per query if needed).
So... I tried this trick:
Ebean.beginTransaction();
int modifiedCount = update.execute();
Ebean.commitTransaction();
Ebean.endTransaction();
Logger.info("update mod = " + modifiedCount);
But this makes no difference; the behavior is the same. Then I tried
Ebean.execute(update);
Again, the same...
Next, I annotated the method with
@Transactional(type=TxType.NEVER)
and
@Transactional(type=TxType.MANDATORY)
Neither of them made a difference.
I am so frustrated with Ebean :(
Can anybody help, please?
BTW, I set
Ebean.getServer(null).getAdminLogging().setDebugGeneratedSql(true);
Ebean.getServer(null).getAdminLogging().setDebugLazyLoad(true);
Ebean.getServer(null).getAdminLogging().setLogLevel(LogLevel.SQL);
to see the query in the Play console; other queries are logged, but this UPDATE is not.
Just remove the initial space... Yes, I couldn't believe it either...
Change from " UPDATE... to "UPDATE...
And that's all...
I think you have to use raw SQL instead of the createSqlUpdate statement.