How to log and store Postgres errors? - postgresql

I have a requirement to capture all database errors that occur while calling a procedure/function, inserting a record into a table, or deleting one.
Essentially, whenever any error occurs in the Postgres database, it needs to be captured.
Can anyone suggest a link or a small example showing how to do that?
Till now I have been using:
RAISE NOTICE or RAISE EXCEPTION to print the message to the console log.
But I want to capture these errors and store them in a table (some error log table).
I have set the following parameters in postgresql.conf:
log_destination = 'stderr','csvlog','syslog','eventlog'
logging_collector = on
log_filename = 'tst_log_err-%a.log'
client_min_messages = debug5
log_min_messages = debug5
log_min_error_statement = debug5
log_min_duration_statement = 300ms
log_checkpoints = on
log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d,app=%a,client=%h '
log_lock_waits = on
log_statement = 'all'
and then created a table "postgres_log" with all the specified details.
I then tried stopping and restarting the local PostgreSQL server, but the restart failed.
Kindly suggest.

https://www.postgresql.org/docs/current/static/runtime-config-logging.html
Set logging_collector to on, log_destination to 'stderr,csvlog', and log_min_error_statement to the lowest severity level you want saved in the logs. E.g. log_min_error_statement = notice will capture everything you RAISE NOTICE, RAISE WARNING and so on; log_min_error_statement = warning will capture everything you RAISE WARNING, RAISE EXCEPTION and so on.
Then follow
https://www.postgresql.org/docs/current/static/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-CSVLOG
You can set up a cron job to load the CSV log into the table periodically.
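For example, the csvlog documentation linked above defines a postgres_log table and loads a log file into it with COPY. A minimal sketch, assuming the table was created as in those docs; the exact path and file name depend on your data directory and log_filename setting:
-- load one day's CSV log into the postgres_log table from the csvlog docs;
-- run this from cron, e.g. once per day after the file has rotated
COPY postgres_log FROM '/var/lib/pgsql/data/pg_log/tst_log_err-Mon.csv' WITH csv;
-- optionally keep only the error rows
DELETE FROM postgres_log WHERE error_severity NOT IN ('ERROR', 'FATAL', 'PANIC');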

Related

multiple entries for synchronous_standby_names

I am trying to achieve synchronous streaming to a Barman server, and I need to add an entry to postgresql.conf for this parameter, which already has an entry. I tried a few variations, but they do not work. Any ideas? I also tried '&&', but in vain:
synchronous_standby_names='ANY 1 (*)',barman-wal-archive
2022-06-10 16:50:54.272 BST [11241-43] # app= LOG: syntax error in file "/var/lib/pgsql/13/data/postgresql.conf" line 22, near token ","
2022-06-10 16:50:54.272 BST [11241-44] # app= LOG: configuration file "/var/lib/pgsql/13/data/postgresql.conf" contains errors; no changes were applied
The syntax you are using is not valid, and you won't be able to specify that Barman must be kept synchronous in addition to any one of the other standbys. The best you can do is
synchronous_standby_names = 'FIRST 2 ("barman-wal-archive", standby1, standby2, standby3)'
(You have to double quote all names that are not standard SQL identifiers, for example if they contain -.)
Then PostgreSQL will always keep Barman synchronized, as well as the first available standby server. But that won't make transactions fail if Barman is unavailable, which seems to be what you want.
Keep just
synchronous_standby_names='ANY 1 (*)'
and set
synchronous_commit = on
or
synchronous_commit = remote_write
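Either way, you can check which standbys the primary currently treats as synchronous by querying the pg_stat_replication view, for example:
-- run on the primary; sync_state shows 'sync', 'potential', 'quorum' or 'async'
SELECT application_name, state, sync_priority, sync_state
FROM pg_stat_replication;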

What is empty statement record in PostgreSQL log?

I changed my PostgreSQL cluster configuration to log ALL statements and their duration, and it works correctly, but periodically I see records like this:
2020-12-08 09:31:42.175 +05 [19041:app_name] LOG: 00000: duration: 0.046 ms
2020-12-08 09:31:42.175 +05 [19041:app_name] LOCATION: exec_execute_message, postgres.c:2086
What could it be? log_line_prefix = '%m [%p:%a] '
Also, it's a standby node that replicates the primary.
You are seeing output from log_duration. That just emits the duration without the statement and is somewhat useless.
Set log_duration = off, log_statement = 'none' and log_min_duration_statement = 0, and you will get all statements logged along with their duration, which is usually the most useful setting.
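In postgresql.conf, those suggested settings would look like this (reload the configuration afterwards):
# log every statement together with its duration,
# instead of the bare duration-only lines produced by log_duration
log_duration = off
log_statement = 'none'
log_min_duration_statement = 0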

My Postgres replication isn't functioning, see below for specific error

I have two Postgres databases set up in a Primary/Secondary configuration. I tried to set up replication between them, but have hit a roadblock. Where am I going wrong?
I have checked various configuration files: recovery.conf, postgresql.conf, pg_hba.conf, and all seem to be set up correctly.
This is the error I have found in the pg_log folder:
cp: cannot stat ‘/var/lib/pgsql/walfiles/00000002000001CA0000003E’: No such file or directory
cp: cannot stat ‘/var/lib/pgsql/walfiles/00000003.history’: No such file or directory
2019-04-16 16:17:19 AEST FATAL: database system identifier differs between the primary and standby
2019-04-16 16:17:19 AEST DETAIL: The primary's identifier is 6647133350114885049, the standby's identifier is 6456613398298492847.
I am using PostgreSQL 9.2.23.
This is my recovery.conf:
standby_mode = 'on'
primary_conninfo = 'host=10.201.108.25 port=5432 user=repl-master password=111222333'
restore_command = 'cp -p /var/lib/pgsql/walfiles/%f %p'
trigger_file = '/var/lib/pgsql/i_am_master.pg.trigger'
recovery_target_timeline = 'latest'
archive_cleanup_command = 'pg_archivecleanup /var/lib/pgsql/walfiles %r'
I'd expect replication from Primary to Secondary. So far, nothing.
Appreciate any input/ideas.
You didn't set up replication correctly. You cannot use pg_dump to create the replica, you have to use a physical backup technique like pg_basebackup.
See the documentation for details.
Do not use PostgreSQL 9.2, it is out of support.
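A rough sketch of how the standby could be created with pg_basebackup, using the primary host and replication user from the recovery.conf above (the standby's data directory path is an assumption, and the exact options vary between PostgreSQL versions):
# run on the standby while its server is stopped and its data directory is empty;
# the repl-master role needs the REPLICATION attribute and a pg_hba.conf replication entry
pg_basebackup -h 10.201.108.25 -p 5432 -U repl-master \
    -D /var/lib/pgsql/data -X stream -P
Afterwards, place recovery.conf in the new data directory and start the standby.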

postgresql 9.4 streaming replication

I have the following problem: I am trying to set up a streaming replication scenario with load balancing. I have read various tutorials but I cannot find my mistake. The replication does not work, and I do not have a "wal sender/receiver process". The archiving works, and every time the master restarts, the archived WAL files are copied to the slave. I do not even get any error, and in the configuration file(s) everything looks fine to me, e.g. on the master:
wal_level = hot_standby
wal_keep_segments = 32
max_wal_senders = 5
max_replication_slots = 5
wal_sender_timeout = 60s
What irritates me the most is that there is no "wal sender process" and there is no error thrown.
Thank you for any idea,
Sven
UPDATE 1: my recovery.conf:
standby_mode = 'on'
primary_conninfo = 'host=arcserver1 port=5432 user=postgres pass=postgres'
restore_command = 'pg_standby /db/pg_archived %f %p >> /var/log/standby.log'
primary_slot_name='standby1'
and my client postgresql.conf contains:
hot_standby = on
I found the solution: I replaced pg_standby with cp in restore_command, because pg_standby seems to be intended only for warm standby, not hot standby.
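The corrected line in recovery.conf would then look something like this, keeping the archive directory from the original restore_command:
# plain cp fails once the archive is exhausted, which lets the standby
# switch over to streaming via primary_conninfo (hot standby)
restore_command = 'cp /db/pg_archived/%f %p'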

Can I log query execution time in PostgreSQL 8.4?

I want to log the execution time of each query run during the day.
For example, like this:
2012-10-01 13:23:38 STATEMENT: SELECT * FROM pg_stat_database runtime:265 ms.
Please give me some guideline.
If you set
log_min_duration_statement = 0
log_statement = all
in your postgresql.conf, then you will see all statements being logged into the Postgres logfile.
If you enable
log_duration
that will also print the time taken for each statement. This is off by default.
Using the log_statement parameter you can control which types of statement you want to log (DDL, DML, ...).
This will produce an output like this in the logfile:
2012-10-01 13:00:43 CEST postgres LOG: statement: select count(*) from pg_class;
2012-10-01 13:00:43 CEST postgres LOG: duration: 47.000 ms
More details in the manual:
http://www.postgresql.org/docs/8.4/static/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHEN
http://www.postgresql.org/docs/8.4/static/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT
If you want a daily list, you probably want to configure the logfile to rotate on a daily basis. Again this is described in the manual.
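For example, daily rotation could be configured along these lines in postgresql.conf (the file name pattern is just an illustration):
logging_collector = on
log_filename = 'postgresql-%a.log'   # one file per weekday, e.g. postgresql-Mon.log
log_rotation_age = 1440              # rotate after 24 hours (value is in minutes)
log_truncate_on_rotation = on        # overwrite last week's file instead of appending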
I believe OP was actually asking for execution duration, not the timestamp.
To include the duration in the log output, open pgsql/<version>/data/postgresql.conf, find the line that reads
#log_duration = off
and change it to
log_duration = on
If you can't find the given parameter, just add it in a new line in the file.
After saving the changes, restart the postgresql service, or just invoke
pg_ctl reload -D <path to the directory of postgresql.conf>
e.g.
pg_ctl reload -D /var/lib/pgsql/9.2/data/
to reload the configuration.
I think a better option is to enable the pg_stat_statements extension. This will let you find the execution time of each query, nicely recorded in a view.
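A rough sketch of how that could look on a reasonably recent PostgreSQL (in 8.4 the module is installed from the contrib SQL scripts instead of CREATE EXTENSION, and column names differ between versions):
-- requires shared_preload_libraries = 'pg_stat_statements' in postgresql.conf
-- and a server restart before the view starts collecting data
CREATE EXTENSION pg_stat_statements;

-- the slowest queries by total execution time
-- (total_time/mean_time are named total_exec_time/mean_exec_time from PostgreSQL 13 on)
SELECT query, calls, total_time, mean_time
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;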