All other database operations run at full speed.
This happens on several hosts and in Ubuntu 22.04 LXC containers that we have otherwise been using very successfully for a couple of years.
Turning fsync off doesn't make any difference.
Disk I/O and processor utilisation are minimal, so that isn't it.
I tried logging with debug level 3 but could not find anything.
Listing the various process states in SQL and Linux just shows processes quietly waiting for something, but I cannot find out what exactly.
The template1 database is virtually empty.
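One thing not tried above (a sketch on my part, assuming a reasonably recent PostgreSQL, 9.6 or later) is to look at the wait events of the stuck backends while a slow CREATE DATABASE is running, rather than at the OS process list:

SELECT pid, state, wait_event_type, wait_event, query
FROM pg_stat_activity
WHERE query ILIKE '%DATABASE%';  -- illustrative filter on the statement text

The slow statements from the server log: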
2022-10-31 14:10:02.743 UTC [1086558] exodus@exodus LOG: duration: 17249.532 ms statement: CREATE DATABASE xo_dict WITH ENCODING='UTF8'
2022-10-31 14:10:11.569 UTC [1090734] exodus@exodus LOG: duration: 8010.033 ms statement: DROP DATABASE exodus2b
2022-10-31 14:10:13.359 UTC [1086558] exodus@exodus LOG: duration: 9596.890 ms statement: DROP DATABASE xo_dict
2022-10-31 14:10:15.076 UTC [1090734] exodus@exodus LOG: duration: 3491.147 ms statement: DROP DATABASE exodus3b
2022-10-31 14:10:32.291 UTC [1093962] exodus@exodus LOG: duration: 15510.507 ms statement: CREATE DATABASE exodus2b WITH ENCODING='UTF8'
2022-10-31 14:10:52.174 UTC [1093962] exodus@exodus LOG: duration: 19864.597 ms statement: CREATE DATABASE exodus3b WITH ENCODING='UTF8' TEMPLATE exodus2b
2022-10-31 14:10:52.932 UTC [1093962] exodus@exodus LOG: duration: 740.990 ms statement: DROP DATABASE exodus2b
2022-10-31 14:10:55.849 UTC [1093962] exodus@exodus LOG: duration: 2129.943 ms statement: DROP DATABASE exodus3b
2022-10-31 14:11:13.755 UTC [1102944] exodus@exodus LOG: duration: 17885.511 ms statement: CREATE DATABASE exodus2b WITH ENCODING='UTF8'
2022-10-31 14:11:43.537 UTC [1102944] exodus@exodus LOG: duration: 29769.648 ms statement: CREATE DATABASE exodus3b WITH ENCODING='UTF8'
2022-10-31 14:21:33.410 UTC [1247048] exodus@exodus LOG: duration: 15115.960 ms statement: CREATE DATABASE xo_dict WITH ENCODING='UTF8'
Related
Postgres is restarting continuously when using the shared_preload_libraries extension.
https://postgresqlco.nf/doc/en/param/shared_preload_libraries/
I am running postgres-15.1 via a Python-based daemon on CentOS 7 (32-bit). It works fine if we do not use "shared_preload_libraries". But after setting it with the "ALTER SYSTEM SET shared_preload_libraries" command, Postgres restarts every few seconds.
Initially it was working fine with postgres-9.6.4.
Postgres logs:
waiting for server to start....2023-02-15 07:13:45.676 GMT [28605] LOG: skipping missing configuration file "/home/runtime/pgsql/data/postgresql.auto.conf"
2023-02-15 07:13:45.825 GMT [28605] LOG: starting PostgreSQL 15.1 on i686-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44), 32-bit
2023-02-15 07:13:45.825 GMT [28605] LOG: listening on IPv4 address "127.0.0.1", port 5432
2023-02-15 07:13:45.933 GMT [28605] LOG: listening on Unix socket "/home/runtime/pgsql/.s.PGSQL.5432"
2023-02-15 07:13:45.969 GMT [28608] LOG: database system was shut down at 2023-02-15 07:13:35 GMT
2023-02-15 07:13:45.989 GMT [28605] LOG: database system is ready to accept connections
done
server started
ALTER SYSTEM
ALTER SYSTEM
ALTER SYSTEM
ALTER SYSTEM
2023-02-15 07:13:51.480 GMT [28605] LOG: received fast shutdown request
waiting for server to shut down....2023-02-15 07:13:51.512 GMT [28605] LOG: aborting any active transactions
2023-02-15 07:13:51.513 GMT [28605] LOG: background worker "logical replication launcher" (PID 28611) exited with exit code 1
2023-02-15 07:13:51.513 GMT [28606] LOG: shutting down
2023-02-15 07:13:51.536 GMT [28606] LOG: checkpoint starting: shutdown immediate
2023-02-15 07:13:51.908 GMT [28606] LOG: checkpoint complete: wrote 3 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.090 s, sync=0.028 s, total=0.395 s; sync files=2, longest=0.021 s, average=0.014 s; distance=0 kB, estimate=0 kB
2023-02-15 07:13:51.909 GMT [28605] LOG: database system is shut down
done
server stopped
I tried postgres-15.0 and postgres-14.4 and got the same behavior with both. I am not able to find any open issues regarding shared_preload_libraries with the newer versions of Postgres.
PS: I have built this Postgres from the source code with openssl-1.1.1i.
I am using the "citus" library with it.
ALTER SYSTEM SET shared_preload_libraries="citus";
I have generated a new citus.so file from its source code (github.com/citusdata/citus) using postgres-15.1.
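If the suspicion is that the setting itself (or a citus.so built against the wrong server version) is what keeps crashing the postmaster, a minimal way to back the change out is sketched below: either delete the shared_preload_libraries line from postgresql.auto.conf while the server is stopped, or, during a window when the server is up, run

ALTER SYSTEM RESET shared_preload_libraries;
-- takes effect only after the next server restart

and restart. shared_preload_libraries is only read at postmaster start, so every change to it needs a full restart.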
I have an ELK stack (Elasticsearch, Logstash, Kibana) for monitoring logs. The PostgreSQL logs are shipped with Filebeat to Logstash and displayed in Kibana.
I enabled log_duration in postgresql.conf and the statements are logged correctly.
For example, the PostgreSQL log looks like this:
2023-01-11 06:17:09.754 EST [19751] user@books LOG: duration: 0.014 ms execute <unnamed>: SET SESSION CHARACTERISTICS AS TRANSACTION READ ONLY
2023-01-11 06:17:09.755 EST [19751] user@books LOG: duration: 0.016 ms bind S_1: BEGIN
2023-01-11 06:17:09.755 EST [19751] user@books LOG: duration: 0.006 ms execute S_1: BEGIN
2023-01-11 06:17:09.756 EST [19751] user@books LOG: duration: 0.488 ms parse <unnamed>: select * from books
But in Kibana the duration value is not a separate field; it is displayed as part of the message field, so aggregations and other operations are not possible.
How can I extract and split the duration value out of the message?
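One common approach (a sketch only; the field name duration_ms is my choice and the pattern assumes the message text shown above) is a grok filter in the Logstash pipeline that copies the duration into its own numeric field, which Kibana can then aggregate:

filter {
  grok {
    match => { "message" => "duration: %{NUMBER:duration_ms:float} ms" }
  }
}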
I don't understand this PostgreSQL log:
2022-03-27 08:00:19.441 UTC [584262] postgres@boutique2 FATAL: database "boutique2" does not exist
2022-03-27 08:00:19.704 UTC [584264] postgres@boutique2 FATAL: database "boutique2" does not exist
2022-03-27 08:01:54.770 UTC [781] LOG: received fast shutdown request
2022-03-27 08:01:54.773 UTC [781] LOG: aborting any active transactions
2022-03-27 08:01:54.779 UTC [781] LOG: background worker "logical replication launcher" (PID 800) exited with exit code 1
2022-03-27 08:01:54.780 UTC [795] LOG: shutting down
2022-03-27 08:01:54.797 UTC [781] LOG: database system is shut down
2022-03-27 08:02:16.254 UTC [770] LOG: starting PostgreSQL 13.5 (Debian 13.5-0+deb11u1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
2022-03-27 08:02:16.255 UTC [770] LOG: listening on IPv4 address "127.0.0.1", port 5432
2022-03-27 08:02:16.256 UTC [770] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2022-03-27 08:02:16.271 UTC [772] LOG: database system was shut down at 2022-03-27 08:01:54 UTC
2022-03-27 08:02:16.285 UTC [770] LOG: database system is ready to accept connections
2022-03-27 08:02:17.243 UTC [891] postgres@boutique2 FATAL: database "boutique2" does not exist
2022-03-27 08:02:17.640 UTC [1044] postgres@boutique2 FATAL: database "boutique2" does not exist
I dropped this database, and it does not appear in my Mojolicious scripts, which are the only ones on this server.
root@perso:/etc/postgresql/13/main# grep postgresql.conf -e 'boutique2'
root@perso:/etc/postgresql/13/main# grep pg_hba.conf -e 'boutique2'
root@perso:/etc/postgresql/13/main#
Does anyone have an idea about this, please?
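Since the configuration files contain no reference to boutique2, the FATAL messages are presumably coming from some client or cron/monitoring job that still tries to connect to it. One sketch for tracking it down (these parameters go in postgresql.conf, followed by a configuration reload) is to log every connection attempt together with the client host and application name:

log_connections = on
log_line_prefix = '%m [%p] %q%u@%d %a %h '

The %a (application name) and %h (remote host) fields in subsequent log lines should show where the connections to the dropped database originate.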
In the Datastore logs I encountered the following error, and I am not sure what has gone wrong.
[7804] LOG: starting PostgreSQL 13.1, compiled by Visual C++ build 1914, 64-bit
2021-08-23 22:56:15.980 CEST [7804] LOG: listening on IPv4 address "127.0.0.1", port 9003
2021-08-23 22:56:15.983 CEST [7804] LOG: listening on IPv4 address "10.91.198.36", port 9003
2021-08-23 22:56:16.041 CEST [8812] LOG: database system was shut down at 2021-08-23 22:54:51 CEST
2021-08-23 22:56:16.044 CEST [8812] LOG: invalid primary checkpoint record
2021-08-23 22:56:16.045 CEST [8812] PANIC: could not locate a valid checkpoint record
2021-08-23 22:56:16.076 CEST [7804] LOG: startup process (PID 8812) was terminated by exception 0xC0000409
2021-08-23 22:56:16.076 CEST [7804] HINT: See C include file "ntstatus.h" for a description of the hexadecimal value.
2021-08-23 22:56:16.078 CEST [7804] LOG: aborting startup due to startup process failure
2021-08-23 22:56:16.094 CEST [7804] LOG: database system is shut down
Somebody deleted crucial WAL files (to free space?), and now your cluster is corrupted.
Restore from backup. If you have no backup, running pg_resetwal is an option, since it seems there was a clean shutdown.
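As a sketch of that last-resort option (the data directory path below is a placeholder; run it as the operating-system user that owns the data directory, with the server stopped, and be aware it can leave the cluster inconsistent):

pg_resetwal -D /path/to/data/directory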
I am trying to restore a PostgreSQL database to a point in time.
When I use only restore_command in recovery.conf, it works fine.
restore_command = 'cp /var/lib/pgsql/pg_log_archive/%f %p'
When I am using the recovery_target_time parameter, it is not restoring to the target time.
restore_command = 'cp /var/lib/pgsql/pg_log_archive/%f %p'
recovery_target_time='2018-06-05 06:43:00.0'
Below is the log file content:
2018-06-05 07:31:39.166 UTC [22512] LOG: database system was interrupted; last known up at 2018-06-05 06:35:52 UTC
2018-06-05 07:31:39.664 UTC [22512] LOG: starting point-in-time recovery to 2018-06-05 06:43:00+00
2018-06-05 07:31:39.671 UTC [22512] LOG: restored log file "00000005.history" from archive
2018-06-05 07:31:39.769 UTC [22512] LOG: restored log file "00000005000000020000008F" from archive
2018-06-05 07:31:39.816 UTC [22512] LOG: redo starts at 2/8F000028
2018-06-05 07:31:39.817 UTC [22512] LOG: consistent recovery state reached at 2/8F000130
2018-06-05 07:31:39.818 UTC [22510] LOG: database system is ready to accept read only connections
2018-06-05 07:31:39.912 UTC [22512] LOG: restored log file "000000050000000200000090" from archive
2018-06-05 07:31:39.996 UTC [22512] LOG: recovery stopping before abort of transaction 9525, time 2018-06-05 06:45:02.088502+00
2018-06-05 07:31:39.996 UTC [22512] LOG: recovery has paused
I am trying to restore the database instance to 06:43:00. Why is it recovering up to 06:45:02?
EDIT
In the first scenario, recovery.conf was converted into recovery.done, but this did not happen in the second scenario.
What could be the reason for this?
You forgot to set
recovery_target_action = 'promote'
After point-in-time recovery, recovery_target_action determines how PostgreSQL proceeds.
The default value is pause, which means that PostgreSQL does nothing and waits for you to tell it how to proceed.
To complete recovery, connect to the database and run
SELECT pg_wal_replay_resume();
It seems that there has been no database activity logged between 06:43:00 and 06:45:02. Observe that the log says recovery stopping before abort of transaction 9525.
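Putting the pieces together, a recovery.conf for this scenario could look like the sketch below (restore_command and timestamp taken from the question); with promote, recovery ends at the target, the server opens for read/write, and recovery.conf is renamed to recovery.done:

restore_command = 'cp /var/lib/pgsql/pg_log_archive/%f %p'
recovery_target_time = '2018-06-05 06:43:00.0'
recovery_target_action = 'promote'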