I am using pg_rewind as follows on the slave:
/usr/pgsql-11/bin/pg_rewind -D <data_dir_path> --source-server="port=5432 user=myuser host=<ip>"
The command completes successfully with:
source and target cluster are on the same timeline
no rewind required
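To double-check which timeline each cluster is actually on, pg_controldata can be run against each data directory (the path below is the one from the systemctl output further down; adjust if yours differs):
/usr/pgsql-11/bin/pg_controldata /var/lib/pgsql/11/data | grep TimeLineID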
After that, I created recovery.conf on the new slave as follows:
standby_mode = 'on'
primary_conninfo = 'host=<master_ip> port=5432 user=<uname> password=<password> sslmode=require sslcompression=0'
trigger_file = '/tmp/MasterNow'
After that, I start PostgreSQL on the slave and check its status, and I get the following messages:
]# systemctl status postgresql-11
● postgresql-11.service - PostgreSQL 11 database server
Loaded: loaded (/usr/lib/systemd/system/postgresql-11.service; enabled; vendor preset: disabled)
Active: activating (start) since Thu 2019-05-02 10:36:11 UTC; 33min ago
Docs: https://www.postgresql.org/docs/11/static/
Process: 26444 ExecStartPre=/usr/pgsql-11/bin/postgresql-11-check-db-dir ${PGDATA} (code=exited, status=0/SUCCESS)
Main PID: 26450 (postmaster)
CGroup: /system.slice/postgresql-11.service
├─26450 /usr/pgsql-11/bin/postmaster -D /var/lib/pgsql/11/data/
└─26458 postgres: startup recovering 000000060000000000000008
May 02 11:09:13 my.localhost postmaster[26450]: 2019-05-02 11:09:13 UTC LOG: record length 1485139969 at 0/8005CB0 too long
May 02 11:09:18 my.localhost postmaster[26450]: 2019-05-02 11:09:18 UTC LOG: record length 1485139969 at 0/8005CB0 too long
May 02 11:09:23 my.localhost postmaster[26450]: 2019-05-02 11:09:23 UTC LOG: record length 1485139969 at 0/8005CB0 too long
May 02 11:09:28 my.localhost postmaster[26450]: 2019-05-02 11:09:28 UTC LOG: record length 1485139969 at 0/8005CB0 too long
May 02 11:09:33 my.localhost postmaster[26450]: 2019-05-02 11:09:33 UTC LOG: record length 1485139969 at 0/8005CB0 too long
May 02 11:09:38 my.localhost postmaster[26450]: 2019-05-02 11:09:38 UTC LOG: record length 1485139969 at 0/8005CB0 too long
May 02 11:09:43 my.localhost postmaster[26450]: 2019-05-02 11:09:43 UTC LOG: record length 1485139969 at 0/8005CB0 too long
May 02 11:09:48 my.localhost postmaster[26450]: 2019-05-02 11:09:48 UTC LOG: record length 1485139969 at 0/8005CB0 too long
May 02 11:09:53 my.localhost postmaster[26450]: 2019-05-02 11:09:53 UTC LOG: record length 1485139969 at 0/8005CB0 too long
May 02 11:09:58 my.localhost postmaster[26450]: 2019-05-02 11:09:58 UTC LOG: record length 1485139969 at 0/8005CB0 too long
Hint: Some lines were ellipsized, use -l to show in full.
On the master, the pg_wal directory looks as follows:
root#{/var/lib/pgsql/11/data/pg_wal}#ls -R
.:
000000010000000000000003 000000020000000000000006 000000040000000000000006 000000050000000000000008 archive_status
000000010000000000000004 00000002.history 00000004.history 00000005.history
000000020000000000000004 000000030000000000000006 000000050000000000000006 000000060000000000000008
000000020000000000000005 00000003.history 000000050000000000000007 00000006.history
./archive_status:
000000050000000000000006.done 000000050000000000000007.done
PostgreSQL logs from slave:
May 3 06:08:58 postgres[9226]: [39-1] 2019-05-03 06:08:58 UTC LOG: entering standby mode
May 3 06:08:58 postgres[9226]: [40-1] 2019-05-03 06:08:58 UTC LOG: invalid resource manager ID 80 at 0/8005C78
May 3 06:08:58 postgres[9226]: [41-1] 2019-05-03 06:08:58 UTC DEBUG: switched WAL source from archive to stream after failure
May 3 06:08:58 postgres[9227]: [35-1] 2019-05-03 06:08:58 UTC DEBUG: find_in_dynamic_libpath: trying "/usr/pgsql-11/lib/libpqwalreceiver"
May 3 06:08:58 postgres[9227]: [36-1] 2019-05-03 06:08:58 UTC DEBUG: find_in_dynamic_libpath: trying "/usr/pgsql-11/lib/libpqwalreceiver.so"
May 3 06:08:58 postgres[9227]: [37-1] 2019-05-03 06:08:58 UTC LOG: started streaming WAL from primary at 0/8000000 on timeline 6
May 3 06:08:58 postgres[9227]: [38-1] 2019-05-03 06:08:58 UTC DEBUG: sendtime 2019-05-03 06:08:58.348488+00 receipttime 2019-05-03 06:08:58.350018+00 replication apply delay (N/A) transfer latency 1 ms
May 3 06:08:58 postgres[9227]: [39-1] 2019-05-03 06:08:58 UTC DEBUG: sending write 0/8020000 flush 0/0 apply 0/0
May 3 06:08:58 postgres[9227]: [40-1] 2019-05-03 06:08:58 UTC DEBUG: sending write 0/8020000 flush 0/8020000 apply 0/0
May 3 06:08:58 postgres[9226]: [42-1] 2019-05-03 06:08:58 UTC LOG: invalid resource manager ID 80 at 0/8005C78
May 3 06:08:58 postgres[9227]: [41-1] 2019-05-03 06:08:58 UTC DEBUG: sendtime 2019-05-03 06:08:58.349865+00 receipttime 2019-05-03 06:08:58.35253+00 replication apply delay 0 ms transfer latency 2 ms
May 3 06:08:58 postgres[9227]: [42-1] 2019-05-03 06:08:58 UTC DEBUG: sending write 0/8040000 flush 0/8020000 apply 0/0
May 3 06:08:58 postgres[9227]: [43-1] 2019-05-03 06:08:58 UTC DEBUG: sending write 0/8040000 flush 0/8040000 apply 0/0
May 3 06:08:58 postgres[9227]: [44-1] 2019-05-03 06:08:58 UTC DEBUG: sending write 0/8040000 flush 0/8040000 apply 0/0
May 3 06:08:58 postgres[9227]: [45-1] 2019-05-03 06:08:58 UTC FATAL: terminating walreceiver process due to administrator command
May 3 06:08:58 postgres[9227]: [46-1] 2019-05-03 06:08:58 UTC DEBUG: shmem_exit(1): 1 before_shmem_exit callbacks to make
May 3 06:08:58 postgres[9227]: [47-1] 2019-05-03 06:08:58 UTC DEBUG: shmem_exit(1): 5 on_shmem_exit callbacks to make
May 3 06:08:58 postgres[9227]: [48-1] 2019-05-03 06:08:58 UTC DEBUG: proc_exit(1): 2 callbacks to make
May 3 06:08:58 postgres[9227]: [49-1] 2019-05-03 06:08:58 UTC DEBUG: exit(1)
May 3 06:08:58 postgres[9227]: [50-1] 2019-05-03 06:08:58 UTC DEBUG: shmem_exit(-1): 0 before_shmem_exit callbacks to make
May 3 06:08:58 postgres[9227]: [51-1] 2019-05-03 06:08:58 UTC DEBUG: shmem_exit(-1): 0 on_shmem_exit callbacks to make
May 3 06:08:58 postgres[9227]: [52-1] 2019-05-03 06:08:58 UTC DEBUG: proc_exit(-1): 0 callbacks to make
May 3 06:08:58 postgres[9218]: [35-1] 2019-05-03 06:08:58 UTC DEBUG: reaping dead processes
I'd say that the standby is trying to recover along the wrong timeline (5, I guess), does not follow the new primary to the latest timeline, and keeps hitting an invalid record in the WAL file.
I'd add the following to recovery.conf:
recovery_target_timeline = 'latest'
For a more detailed analysis, look into the PostgreSQL log file.
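For reference, the standby's recovery.conf would then look roughly like this (connection settings copied from the question; adjust to your environment):
standby_mode = 'on'
primary_conninfo = 'host=<master_ip> port=5432 user=<uname> password=<password> sslmode=require sslcompression=0'
trigger_file = '/tmp/MasterNow'
recovery_target_timeline = 'latest'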
I've correctly installed Postgres 13 with BDR. The first node is configured correctly:
create_node
-------------
661510928
(1 row)
create_node_group
-------------------
3209631483
(1 row)
wait_for_join_completion
--------------------------
ACTIVE
(1 row)
The problem is on the second node, when I try to join node 1 with this command:
bdr_init_physical -D /home/postgres/data -n bdr_node_rm1_02 --local-dsn="port=5432 dbname=lmw host=192.168.0.101 user=postgres password=PWD" -d "port=5432 dbname=lmw host=192.168.0.102 user=postgres password=PWD"
Starting bdr_init_physical ...
Getting remote server identification ...
Creating replication slot on remote node ...
Creating base backup of the remote node ...
38798/38798 kB (100%), 1/1 tablespace
Creating temporary synchronization replication slot on remote node ...
Bringing local node to the target lsn ...
I can see in the log that this command is sitting idle:
Feb 15 10:16:43 localhost postgres[8080]: [11-1] 2023-02-15 10:16:43.685 CET [8080] LOG: logical decoding found consistent point at 0/405EA10
Feb 15 10:16:43 localhost postgres[8080]: [11-2] 2023-02-15 10:16:43.685 CET [8080] DETAIL: There are no running transactions.
Feb 15 10:16:43 localhost postgres[8080]: [11-3] 2023-02-15 10:16:43.685 CET [8080] STATEMENT: SELECT pg_catalog.pg_create_logical_replication_slot('bdr_lmw_lmw_bdr_node_rm1_02', 'pglogical_output') -- bdr_init_physical
Feb 15 10:16:43 localhost postgres[8081]: [11-1] 2023-02-15 10:16:43.734 CET [8081] LOG: using default exclude directory 0xa0a220 0xa0a220
Feb 15 10:16:43 localhost postgres[8081]: [11-2] 2023-02-15 10:16:43.734 CET [8081] STATEMENT: BASE_BACKUP LABEL 'pg_basebackup base backup' PROGRESS FAST NOWAIT MANIFEST 'yes'
Feb 15 10:16:45 localhost postgres[8078]: [11-1] 2023-02-15 10:16:45.851 CET [8078] LOG: logical decoding found consistent point at 0/6000028
Feb 15 10:16:45 localhost postgres[8078]: [11-2] 2023-02-15 10:16:45.851 CET [8078] DETAIL: There are no running transactions.
Feb 15 10:16:45 localhost postgres[8078]: [11-3] 2023-02-15 10:16:45.851 CET [8078] STATEMENT: CREATE_REPLICATION_SLOT "bdr_lmw_lmw_bdr_node_rm1_02_tmp" TEMPORARY LOGICAL pglogical_output
Feb 15 10:16:45 localhost postgres[8078]: [12-1] 2023-02-15 10:16:45.851 CET [8078] LOG: exported logical decoding snapshot: "00000008-0000002C-1" with 0 transaction IDs
Feb 15 10:16:45 localhost postgres[8078]: [12-2] 2023-02-15 10:16:45.851 CET [8078] STATEMENT: CREATE_REPLICATION_SLOT "bdr_lmw_lmw_bdr_node_rm1_02_tmp" TEMPORARY LOGICAL pglogical_output
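While it sits there, I can at least check whether the local node is still replaying WAL toward the target LSN by comparing positions on both nodes (standard PostgreSQL functions, nothing BDR-specific; node roles as in the bdr_init_physical command above):
-- on the joining (local) node: how far replay has progressed
select pg_last_wal_replay_lsn();
-- on the remote node (node 1): current WAL position, for comparison
select pg_current_wal_lsn();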
Do you know why this command stays idle?
# select version();
> PostgreSQL 11.4 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 8.3.0-3ubuntu1) 8.3.0, 64-bit
My archive_command was broken for a while. PostgreSQL kept the WAL files that had not yet been archived, as expected. Then I killed the PostgreSQL process and started it again, and I noticed that all the WAL files that had been ready for archiving were deleted. They were not in the Google bucket either.
PostgreSQL logs after the restart:
2020-04-22 14:27:23.702 UTC [7] LOG: database system was interrupted; last known up at 2020-04-22 14:27:08 UTC
2020-04-22 14:27:24.819 UTC [7] LOG: database system was not properly shut down; automatic recovery in progress
2020-04-22 14:27:24.848 UTC [7] LOG: redo starts at 4D/BCEF6BA8
2020-04-22 14:27:24.848 UTC [7] LOG: invalid record length at 4D/BCEFF0C0: wanted 24, got 0
2020-04-22 14:27:24.848 UTC [7] LOG: redo done at 4D/BCEFF050
2020-04-22 14:27:25.286 UTC [1] LOG: database system is ready to accept connections
I repeated the scenario with the configuration parameter log_min_messages=DEBUG5 and saw that PostgreSQL removed the old WAL files, ignoring the fact that they were still waiting to be archived.
2020-04-23 14:55:42.819 UTC [6] LOG: redo starts at 0/22000098
2020-04-23 14:55:50.138 UTC [6] LOG: redo done at 0/22193FB0
2020-04-23 14:55:50.138 UTC [6] DEBUG: resetting unlogged relations: cleanup 0 init 1
2020-04-23 14:55:50.266 UTC [6] DEBUG: performing replication slot checkpoint
2020-04-23 14:55:50.336 UTC [6] DEBUG: attempting to remove WAL segments older than log file 000000000000000000000021
2020-04-23 14:55:50.349 UTC [6] DEBUG: recycled write-ahead log file "000000010000000000000015"
2020-04-23 14:55:50.365 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000012"
2020-04-23 14:55:50.372 UTC [6] DEBUG: removing write-ahead log file "00000001000000000000001B"
2020-04-23 14:55:50.382 UTC [6] DEBUG: removing write-ahead log file "00000001000000000000001E"
2020-04-23 14:55:50.390 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000013"
2020-04-23 14:55:50.402 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000014"
2020-04-23 14:55:50.412 UTC [6] DEBUG: removing write-ahead log file "00000001000000000000001D"
2020-04-23 14:55:50.424 UTC [6] DEBUG: removing write-ahead log file "00000001000000000000001C"
2020-04-23 14:55:50.433 UTC [6] DEBUG: removing write-ahead log file "00000001000000000000000F"
2020-04-23 14:55:50.442 UTC [6] DEBUG: removing write-ahead log file "00000001000000000000001F"
2020-04-23 14:55:50.455 UTC [6] DEBUG: removing write-ahead log file "00000001000000000000001A"
2020-04-23 14:55:50.471 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000020"
2020-04-23 14:55:50.480 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000018"
2020-04-23 14:55:50.489 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000011"
2020-04-23 14:55:50.502 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000016"
2020-04-23 14:55:50.518 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000017"
2020-04-23 14:55:50.529 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000010"
2020-04-23 14:55:50.536 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000019"
2020-04-23 14:55:50.547 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000021"
2020-04-23 14:55:50.559 UTC [6] DEBUG: MultiXactId wrap limit is 2147483648, limited by database with OID 1
2020-04-23 14:55:50.559 UTC [6] DEBUG: MultiXact member stop limit is now 4294914944 based on MultiXact 1
2020-04-23 14:55:50.566 UTC [6] DEBUG: shmem_exit(0): 1 before_shmem_exit callbacks to make
2020-04-23 14:55:50.566 UTC [6] DEBUG: shmem_exit(0): 4 on_shmem_exit callbacks to make
2020-04-23 14:55:50.566 UTC [6] DEBUG: proc_exit(0): 2 callbacks to make
2020-04-23 14:55:50.566 UTC [6] DEBUG: exit(0)
2020-04-23 14:55:50.566 UTC [6] DEBUG: shmem_exit(-1): 0 before_shmem_exit callbacks to make
2020-04-23 14:55:50.566 UTC [6] DEBUG: shmem_exit(-1): 0 on_shmem_exit callbacks to make
2020-04-23 14:55:50.566 UTC [6] DEBUG: proc_exit(-1): 0 callbacks to make
2020-04-23 14:55:50.571 UTC [1] DEBUG: reaping dead processes
2020-04-23 14:55:50.572 UTC [10] DEBUG: autovacuum launcher started
2020-04-23 14:55:50.572 UTC [1] DEBUG: starting background worker process "logical replication launcher"
2020-04-23 14:55:50.572 UTC [10] DEBUG: InitPostgres
2020-04-23 14:55:50.572 UTC [10] DEBUG: my backend ID is 1
Is there any way to prevent PostgreSQL from removing WAL files that have not been archived yet?
It looks like this is "[BUG] non archived WAL removed during production crash recovery", which was reported in the last few weeks:
https://www.postgresql.org/message-id/20200331172229.40ee00dc%40firost
According to the PostgreSQL mailing list discussion, a patch is currently under development but not yet available. It could be available in May at the earliest.
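Until the fix is available, it may help to keep an eye on archiving before restarting anything; segments still waiting to be archived have .ready files under pg_wal/archive_status, and the overall archiver state can be checked with a standard view, for example:
select archived_count, failed_count, last_archived_wal, last_failed_wal
from pg_stat_archiver;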
I have a 3-node HA setup of PostgreSQL:
Primary node (192.168.50.3)
Secondary node (192.168.50.4)
Secondary node (192.168.50.5)
Both secondary nodes point to the primary, and their recovery.conf looks like this:
standby_mode = 'on'
primary_conninfo = 'host=192.168.50.3 port=5432 user=<uname> password=<pass_here> sslmode=require sslcompression=0'
trigger_file = '/tmp/MasterNow'
recovery_target_timeline = 'latest'
Now, when I restart PostgreSQL on the primary (50.3), I get the following FATAL messages in the PostgreSQL log.
Aug 7 07:58:55 cluster-node1 postgres[1671]: [1-1] 2019-08-07 07:58:55 UTC LOG: listening on IPv4 address "0.0.0.0", port 5432
Aug 7 07:58:55 cluster-node1 postgres[1671]: [2-1] 2019-08-07 07:58:55 UTC LOG: could not create IPv6 socket for address "::": Address family not supported by protocol
Aug 7 07:58:55 cluster-node1 postgres[1671]: [3-1] 2019-08-07 07:58:55 UTC LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
Aug 7 07:58:55 cluster-node1 postgres[1671]: [4-1] 2019-08-07 07:58:55 UTC LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
Aug 7 07:58:55 cluster-node1 postgres[1671]: [5-1] 2019-08-07 07:58:55 UTC LOG: ending log output to stderr
Aug 7 07:58:55 cluster-node1 postgres[1671]: [5-2] 2019-08-07 07:58:55 UTC HINT: Future log output will go to log destination "syslog".
Aug 7 07:58:55 cluster-node1 postgres[1679]: [6-1] 2019-08-07 07:58:55 UTC LOG: database system was shut down at 2019-08-07 07:58:55 UTC
Aug 7 07:58:55 cluster-node1 postgres[1671]: [6-1] 2019-08-07 07:58:55 UTC LOG: database system is ready to accept connections
Aug 7 08:00:44 cluster-node1 postgres[1671]: [7-1] 2019-08-07 08:00:44 UTC LOG: received fast shutdown request
Aug 7 08:00:44 cluster-node1 postgres[1671]: [8-1] 2019-08-07 08:00:44 UTC LOG: aborting any active transactions
Aug 7 08:00:44 cluster-node1 postgres[1671]: [7-1] 2019-08-07 08:00:44 UTC LOG: received fast shutdown request
Aug 7 08:00:44 cluster-node1 postgres[1671]: [8-1] 2019-08-07 08:00:44 UTC LOG: aborting any active transactions
Aug 7 08:00:44 cluster-node1 postgres[1712]: [7-1] 2019-08-07 08:00:44 UTC FATAL: terminating connection due to administrator command
Aug 7 08:00:44 cluster-node1 postgres[1701]: [7-1] 2019-08-07 08:00:44 UTC FATAL: terminating connection due to administrator command
Aug 7 08:00:44 cluster-node1 postgres[1699]: [7-1] 2019-08-07 08:00:44 UTC FATAL: terminating connection due to administrator command
Aug 7 08:00:44 cluster-node1 postgres[1697]: [7-1] 2019-08-07 08:00:44 UTC FATAL: terminating connection due to administrator command
Aug 7 08:00:44 cluster-node1 postgres[1702]: [7-1] 2019-08-07 08:00:44 UTC FATAL: terminating connection due to administrator command
Aug 7 08:00:44 cluster-node1 postgres[1709]: [7-1] 2019-08-07 08:00:44 UTC FATAL: terminating connection due to administrator command
Aug 7 08:00:44 cluster-node1 postgres[1700]: [7-1] 2019-08-07 08:00:44 UTC FATAL: terminating connection due to administrator command
Aug 7 08:00:44 cluster-node1 postgres[1971]: [7-1] 2019-08-07 08:00:44 UTC FATAL: terminating connection due to administrator command
Aug 7 08:00:44 cluster-node1 postgres[1708]: [7-1] 2019-08-07 08:00:44 UTC FATAL: terminating connection due to administrator command
Aug 7 08:00:44 cluster-node1 postgres[1671]: [9-1] 2019-08-07 08:00:44 UTC LOG: background worker "logical replication launcher" (PID 1685) exited with exit code 1
Aug 7 08:00:44 cluster-node1 postgres[1680]: [6-1] 2019-08-07 08:00:44 UTC LOG: shutting down
Aug 7 08:00:45 cluster-node1 postgres[1982]: [10-1] 2019-08-07 08:00:45 UTC FATAL: the database system is shutting down
Aug 7 08:00:45 cluster-node1 postgres[1983]: [10-1] 2019-08-07 08:00:45 UTC FATAL: the database system is shutting down
Aug 7 08:00:48 cluster-node1 postgres[1984]: [10-1] 2019-08-07 08:00:48 UTC FATAL: the database system is shutting down
Setup details:
All nodes run CentOS 7.6.
The PostgreSQL version is 11.
Note:
When I stop PostgreSQL on the secondary nodes first, PostgreSQL restarts successfully on the primary node.
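One thing worth checking before the restart is whether the connections being terminated are just the standbys' replication connections rather than application clients; on the primary, for example:
select pid, application_name, client_addr, state, sync_state
from pg_stat_replication;
The "the database system is shutting down" lines at the end are typically just the standbys trying to reconnect while the primary is going down, which would also explain why stopping the secondaries first makes the restart look clean.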
I'm using PostgreSQL v11.0 and am executing a simple SQL query:
delete from base.sys_attribute where id=20;
It fails and returns the error:
server process (PID 29) was terminated by signal 11 (see log below)
Any idea how I can troubleshoot this issue? I'm completely stuck...
listof-db | 2019-05-10 08:54:46.425 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
listof-db | 2019-05-10 08:54:46.425 UTC [1] LOG: listening on IPv6 address "::", port 5432
listof-db | 2019-05-10 08:54:46.425 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
listof-db | 2019-05-10 08:54:46.435 UTC [20] LOG: database system was shut down at 2019-05-10 08:54:26 UTC
listof-db | 2019-05-10 08:54:46.438 UTC [1] LOG: database system is ready to accept connections
listof-db | 2019-05-10 08:56:52.295 UTC [1] LOG: server process (PID 29) was terminated by signal 11
listof-db | 2019-05-10 08:56:52.295 UTC [1] DETAIL: Failed process was running: delete from base.sys_attribute where id=20
listof-db | 2019-05-10 08:56:52.295 UTC [1] LOG: terminating any other active server processes
listof-db | 2019-05-10 08:56:52.295 UTC [24] WARNING: terminating connection because of crash of another server process
listof-db | 2019-05-10 08:56:52.295 UTC [24] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
listof-db | 2019-05-10 08:56:52.295 UTC [24] HINT: In a moment you should be able to reconnect to the database and repeat your command.
listof-db | 2019-05-10 08:56:52.486 UTC [1] LOG: all server processes terminated; reinitializing
listof-db | 2019-05-10 08:56:53.093 UTC [30] LOG: database system was interrupted; last known up at 2019-05-10 08:54:46 UTC
listof-db | 2019-05-10 08:56:53.187 UTC [30] LOG: database system was not properly shut down; automatic recovery in progress
listof-db | 2019-05-10 08:56:53.189 UTC [30] LOG: redo starts at 0/75955A8
listof-db | 2019-05-10 08:56:53.189 UTC [30] LOG: invalid record length at 0/75971B8: wanted 24, got 0
listof-db | 2019-05-10 08:56:53.189 UTC [30] LOG: redo done at 0/7597180
listof-db | 2019-05-10 08:56:53.194 UTC [1] LOG: database system is ready to accept connections
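One way to start narrowing this down (a rough sketch, not a definitive diagnosis): check whether simply reading the row crashes the backend as well, and look at what fires on DELETE for that table, since a trigger or a corrupted index can bring a backend down just like corrupted heap data:
-- does reading the row also crash the backend?
select * from base.sys_attribute where id = 20;
-- triggers defined on the table (including internal FK triggers)
select tgname, tgenabled, tgisinternal from pg_trigger where tgrelid = 'base.sys_attribute'::regclass;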
PostgreSQL won't start on Ubuntu 16.04. Please help.
This is what the error log reads:
2018-10-12 13:42:34 UTC [3371-2] LOG: received fast shutdown request
2018-10-12 13:42:34 UTC [3371-3] LOG: aborting any active transactions
2018-10-12 13:42:34 UTC [3376-2] LOG: autovacuum launcher shutting down
2018-10-12 13:42:34 UTC [3373-1] LOG: shutting down
2018-10-12 13:42:34 UTC [3373-2] LOG: database system is shut down
2018-10-12 13:42:52 UTC [3855-1] LOG: database system was shut down at 2018-10-12 13:42:34 UTC
2018-10-12 13:42:52 UTC [3855-2] LOG: MultiXact member wraparound protections are now enabled
2018-10-12 13:42:52 UTC [3853-1] LOG: database system is ready to accept connections
2018-10-12 13:42:52 UTC [3859-1] LOG: autovacuum launcher started
2018-10-12 13:42:52 UTC [3861-1] [unknown]#[unknown] LOG: incomplete startup packet
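Note that the log shown above actually ends with the server reporting that it is ready to accept connections; the "incomplete startup packet" line usually just means a client opened a connection and closed it without finishing the handshake (monitoring checks often do that). A quick way to confirm the server really is accepting connections, assuming the default port:
pg_isready -h localhost -p 5432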