Combining postgres query and log duration - postgresql

I am aware that you can log queries and their duration using the configuration below in postgresql.conf
------------------------------------------------------------------------------
CUSTOMIZED OPTIONS
------------------------------------------------------------------------------
log_statement = 'all'
log_duration = on
log_line_prefix = '{"Time":[%t], Host:%h} '
which then produces log lines like
{"Time":[2018-08-13 16:24:20 +08], Host:172.18.0.2} LOG: statement: SELECT "auth_user"."id", "auth_user"."password", "auth_user"."last_login", "auth_user"."is_superuser", "auth_user"."username", "auth_user"."first_name", "auth_user"."last_name", "auth_user"."email", "auth_user"."is_staff", "auth_user"."is_active", "auth_user"."date_joined" FROM "auth_user" WHERE "auth_user"."id" = 1
{"Time":[2018-08-13 16:24:20 +08], Host:172.18.0.2} LOG: duration: 7.694 ms
But can I combine the duration and statement into a single line, like this?
LOG: { statement: ..., duration: 7.694 ms}

With your current settings, the statement is logged when the server starts processing it, but the duration is only known at the end of execution.
That is why they have to be logged as two separate messages.
If you use log_min_duration_statement = 0 instead, the statement is logged at the end of execution, together with its duration.
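A minimal postgresql.conf sketch of that approach, reusing the log_line_prefix from the question:

```ini
# Log every completed statement together with its duration, on one line.
log_min_duration_statement = 0   # 0 ms threshold = log all statements
log_statement = 'none'           # drop the separate "statement:" lines
log_duration = off               # drop the separate "duration:" lines
log_line_prefix = '{"Time":[%t], Host:%h} '
```

Each query then produces a single entry of the form {"Time":[...], Host:...} LOG: duration: 7.694 ms statement: SELECT ...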

Postgresql pg_profile getting error while creating snapshot

I am referring to https://github.com/zubkov-andrei/pg_profile to generate an AWR-like report.
The steps I have followed are:
1) Enabled the parameters below in postgresql.conf (located in D:\Program Files\PostgreSQL\9.6\data)
track_activities = on
track_counts = on
track_io_timing = on
track_functions = on
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.max = 1000
pg_stat_statements.track = 'top'
pg_stat_statements.save = off
pg_profile.topn = 20
pg_profile.retention = 7
2) Manually copied all the files beginning with pg_profile to D:\Program Files\PostgreSQL\9.6\share\extension
3) From the pgAdmin4 console, executed the commands below successfully
CREATE EXTENSION dblink;
CREATE EXTENSION pg_stat_statements;
CREATE EXTENSION pg_profile;
4) To see which node is already present, I executed SELECT * from node_show();
which returned
node_name as local
connstr as dbname=postgres port=5432
enabled as true
5) To create a snapshot, I executed SELECT * from snapshot('local');
but I get the error below
ERROR: could not establish connection
DETAIL: fe_sendauth: no password supplied
CONTEXT: SQL statement "SELECT dblink_connect('node_connection',node_connstr)"
PL/pgSQL function snapshot(integer) line 38 at PERFORM
PL/pgSQL function snapshot(name) line 9 at RETURN
SQL state: 08001
Once I am able to generate multiple snapshots, I should be able to generate a report.
Just use SELECT * from snapshot().
Look at the code of the function: it calls the other one with the node as a parameter.
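The error itself ("fe_sendauth: no password supplied") means dblink's loopback connection had no password to send. A possible fix, assuming the postgres user requires password authentication, is a pgpass file readable by the account the PostgreSQL service runs as (dblink connects from the server side). On Windows this file is %APPDATA%\postgresql\pgpass.conf; the password below is a placeholder:

```ini
# %APPDATA%\postgresql\pgpass.conf (on Linux: ~/.pgpass with mode 0600)
# format: hostname:port:database:username:password
localhost:5432:postgres:postgres:your_password_here
```

Alternatively, the node's connection string can carry the password directly (e.g. dbname=postgres port=5432 password=...), at the cost of storing it in the database.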

Firebird: Query execution time in iSQL

I would like to get the query execution time in iSQL.
For instance:
SELECT * FROM students;
How do I get the query execution time?
Use SET STATS:
SQL> SET STATS;
SQL> SELECT * FROM RDB$DATABASE;
... query output removed ....
Current memory = 34490656
Delta memory = 105360
Max memory = 34612544
Elapsed time= 0.59 sec
Buffers = 2048
Reads = 17
Writes 0
Fetches = 270
SQL>

PlayFramework 2 + Ebean - raw Sql Update query - makes no effect on db

I have a Play Framework 2.0.4 application that needs to modify rows in the db.
I need to update a few messages in the db to the status "opened" (read messages).
I did it like below:
String sql = " UPDATE message SET opened = true, opened_date = now() "
+" WHERE id_profile_to = :id1 AND id_profile_from = :id2 AND opened IS NOT true";
SqlUpdate update = Ebean.createSqlUpdate(sql);
update.setParameter("id1", myProfileId);
update.setParameter("id2", conversationProfileId);
int modifiedCount = update.execute();
I have configured PostgreSQL to log all queries.
modifiedCount is the actual number of modified rows, but the query runs in a transaction.
After the query is done, there is a ROLLBACK in the db, so the UPDATE is not applied.
I have tried switching the db to H2, with the same result.
This is the query from postgres audit log
2012-12-18 00:21:17 CET : S_1: BEGIN
2012-12-18 00:21:17 CET : <unnamed>: UPDATE message SET opened = true, opened_date = now() WHERE id_profile_to = $1 AND id_profile_from = $2 AND opened IS NOT true
2012-12-18 00:21:17 CET : parameters: $1 = '1', $2 = '2'
2012-12-18 00:21:17 CET : S_2: ROLLBACK
..........
The Play Framework documentation and the Ebean docs state that there is no transaction (if not declared), or a transient one per query if needed.
So... I tried this trick:
Ebean.beginTransaction();
int modifiedCount = update.execute();
Ebean.commitTransaction();
Ebean.endTransaction();
Logger.info("update mod = " + modifiedCount);
But this makes no difference; the behavior is the same...
Ebean.execute(update);
Again, the same...
Next, I annotated the method with
@Transactional(type=TxType.NEVER)
and
@Transactional(type=TxType.MANDATORY)
Neither of them made a difference.
I am so frustrated with Ebean :(
Can anybody help, please?
BTW, I set
Ebean.getServer(null).getAdminLogging().setDebugGeneratedSql(true);
Ebean.getServer(null).getAdminLogging().setDebugLazyLoad(true);
Ebean.getServer(null).getAdminLogging().setLogLevel(LogLevel.SQL);
to see the query in the Play console; other queries are logged, but this update is not.
Just remove the initial space... Yes, I couldn't believe it either...
Change " UPDATE... to "UPDATE...
And that's all...
I think you have to use raw SQL instead of a createSqlUpdate statement.

Why are long-running queries blank in postgresql log?

I'm running a log (log_min_duration_statement = 200) to analyse some slow queries in PostgreSQL 9.0, but the statements for the worst queries aren't being logged. Is there any way I can find out what those queries actually are?
(some values replaced with *** for brevity and privacy.)
2012-06-29 02:10:39 UTC LOG: duration: 266.658 ms statement: SELECT *** FROM "oauth_accesstoken" WHERE "oauth_accesstoken"."token" = E'***'
2012-06-29 02:10:40 UTC LOG: duration: 1797.400 ms statement:
2012-06-29 02:10:49 UTC LOG: duration: 1670.132 ms statement:
2012-06-29 02:10:50 UTC LOG: duration: 354.336 ms statement: SELECT *** FROM ***
...
There are some log destination options in postgresql.conf, as shown below. I suggest using csvlog.
log_destination = 'csvlog'
logging_collector = on
log_directory = '/var/applog/pg_log/1922/'
log_rotation_age = 1d
log_rotation_size = 10MB
log_statement = 'ddl' # none, ddl, mod, all
log_min_duration_statement = 200
After making any changes, you need to reload the configuration, e.g. with SELECT pg_reload_conf(); or pg_ctl reload.
It turns out that because I was keeping an eye on the logs with tail -f path | grep 'duration .+ ms', any statement starting with a newline was not visible. I was mainly doing this to highlight the duration string.
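A sketch of the pitfall and a workaround: filtering line by line drops the statement text when it begins on the next physical line. Grouping lines into logical log entries (detected by the timestamp prefix) before filtering keeps each statement attached to its duration. The log sample below is illustrative, not from the question.

```python
import re

# Each log entry starts with a timestamp prefix; continuation lines do not.
ENTRY_START = re.compile(r"^\d{4}-\d{2}-\d{2} ")

def entries(lines):
    """Group physical log lines into logical multi-line entries."""
    entry = []
    for line in lines:
        if ENTRY_START.match(line) and entry:
            yield "\n".join(entry)
            entry = []
        entry.append(line)
    if entry:
        yield "\n".join(entry)

log = """\
2012-06-29 02:10:40 UTC LOG: duration: 1797.400 ms statement:
SELECT some_col
FROM some_table
2012-06-29 02:10:50 UTC LOG: duration: 354.336 ms statement: SELECT 1
"""

# Filter whole entries, not lines: the multi-line statement stays visible.
slow = [e for e in entries(log.splitlines()) if "duration" in e]
print(slow[0])
```

With a plain per-line grep, the first entry would show only the prefix and an empty statement, exactly as in the log excerpt above.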

Deadlock between select and truncate (postgresql)

Table output_values_center1 (and some others) inherits from output_values. Periodically I truncate output_values_center1 and load new data (in one transaction). During that time a user can request some data, and he gets an error message. Why does this happen at all (the select query requests only one record), and how can I avoid the problem?
2010-05-19 14:43:17 UTC ERROR: deadlock detected
2010-05-19 14:43:17 UTC DETAIL: Process 25972 waits for AccessShareLock on relation 2495092 of database 16385; blocked by process 26102.
Process 26102 waits for AccessExclusiveLock on relation 2494865 of database 16385; blocked by process 25972.
Process 25972: SELECT * FROM "output_values" WHERE ("output_values".id = 122312) LIMIT 1
Process 26102: TRUNCATE TABLE "output_values_center1"
"TRUNCATE acquires an ACCESS EXCLUSIVE lock on each table it operates on, which blocks all other concurrent operations on the table. If concurrent access to a table is required, then the DELETE command should be used instead."
It is not obvious from the TRUNCATE documentation quoted above why querying the parent table affects its descendant. The following excerpt from the SELECT documentation clarifies it:
"If ONLY is specified, only that table is scanned. If ONLY is not specified, the table and any descendant tables are scanned."
I'd try this (in pseudocode) for truncating:
#define NOWAIT_TIMES 100
#define SLEEPTIME_USECS (1000*100)
for ( i=0; ; i++ ) {
    ok = query('start transaction');
    if ( !ok ) raise 'Unable to start transaction!';
    queries = array(
        'lock table output_values in access exclusive mode nowait',
        'truncate output_values_center1',
        'commit'
    );
    if ( i > NOWAIT_TIMES ) {
        // we will wait this time, as we tried NOWAIT_TIMES and failed
        queries[0] = 'lock table output_values in access exclusive mode';
    }
    foreach q in queries {
        ok = query(q);
        if ( !ok ) break;
    }
    if ( !ok ) {
        query('rollback');
        usleep(SLEEPTIME_USECS);
    } else {
        break;
    }
}
This way you'll be safe from deadlocks, as the parent table will be exclusively locked. A user's query will just block for a fraction of a second while the truncate runs and will automatically resume after the commit.
But be prepared that this can take several seconds on a busy server: while the table is in use, the lock attempt will fail and be retried.
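The retry loop above can be sketched in Python with the database access abstracted behind a callable, so the logic itself is testable without a server. run_query, the retry parameters, and the return-bool convention are illustrative assumptions, not any real driver's API; in practice run_query would wrap something like a psycopg2 cursor.execute inside a try/except.

```python
import time

NOWAIT_TRIES = 100   # attempts with NOWAIT before blocking for real
SLEEP_SECS = 0.1     # pause between failed attempts

def truncate_with_lock(run_query, sleep=time.sleep):
    """Truncate the child table while holding an exclusive lock on the parent.

    run_query(sql) is a hypothetical helper that must return True on success
    and False on failure (e.g. when LOCK ... NOWAIT cannot get the lock).
    """
    attempt = 0
    while True:
        if not run_query("START TRANSACTION"):
            raise RuntimeError("Unable to start transaction!")
        lock = "LOCK TABLE output_values IN ACCESS EXCLUSIVE MODE"
        if attempt <= NOWAIT_TRIES:
            lock += " NOWAIT"  # fail fast instead of queueing behind readers
        # all() short-circuits: if the lock fails, the truncate never runs.
        ok = all(run_query(q) for q in (lock,
                                        "TRUNCATE output_values_center1",
                                        "COMMIT"))
        if ok:
            return
        run_query("ROLLBACK")
        sleep(SLEEP_SECS)
        attempt += 1
```

Taking the query runner as a parameter keeps the retry policy separate from the driver, which also makes the failure path easy to exercise with a fake.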