How to prevent PostgreSQL from removing WALs when it starts after a hard shutdown?

# select version();
> PostgreSQL 11.4 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 8.3.0-3ubuntu1) 8.3.0, 64-bit
I had a broken archive_command for a while. PostgreSQL kept the WALs as long as they were not archived, as expected. Then I killed the postgres process and started it again, and I noticed that all the WALs that had been ready for archiving were deleted. The WALs were not in the Google bucket either.
PostgreSQL logs after the restart:
2020-04-22 14:27:23.702 UTC [7] LOG: database system was interrupted; last known up at 2020-04-22 14:27:08 UTC
2020-04-22 14:27:24.819 UTC [7] LOG: database system was not properly shut down; automatic recovery in progress
2020-04-22 14:27:24.848 UTC [7] LOG: redo starts at 4D/BCEF6BA8
2020-04-22 14:27:24.848 UTC [7] LOG: invalid record length at 4D/BCEFF0C0: wanted 24, got 0
2020-04-22 14:27:24.848 UTC [7] LOG: redo done at 4D/BCEFF050
2020-04-22 14:27:25.286 UTC [1] LOG: database system is ready to accept connections
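For reference, the segments still waiting to be archived can be listed directly (a sketch; it assumes $PGDATA points at the data directory and that psql can reach the server):
# every WAL segment with a pending .ready marker has not been archived yet
ls "$PGDATA"/pg_wal/archive_status/*.ready
# archiver statistics: last archived segment, last failed segment, failure count
psql -c "SELECT last_archived_wal, last_failed_wal, failed_count FROM pg_stat_archiver;"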
I repeated the scenario with log_min_messages=DEBUG5 and saw that PostgreSQL removed the old WALs even though they were still waiting to be archived.
2020-04-23 14:55:42.819 UTC [6] LOG: redo starts at 0/22000098
2020-04-23 14:55:50.138 UTC [6] LOG: redo done at 0/22193FB0
2020-04-23 14:55:50.138 UTC [6] DEBUG: resetting unlogged relations: cleanup 0 init 1
2020-04-23 14:55:50.266 UTC [6] DEBUG: performing replication slot checkpoint
2020-04-23 14:55:50.336 UTC [6] DEBUG: attempting to remove WAL segments older than log file 000000000000000000000021
2020-04-23 14:55:50.349 UTC [6] DEBUG: recycled write-ahead log file "000000010000000000000015"
2020-04-23 14:55:50.365 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000012"
2020-04-23 14:55:50.372 UTC [6] DEBUG: removing write-ahead log file "00000001000000000000001B"
2020-04-23 14:55:50.382 UTC [6] DEBUG: removing write-ahead log file "00000001000000000000001E"
2020-04-23 14:55:50.390 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000013"
2020-04-23 14:55:50.402 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000014"
2020-04-23 14:55:50.412 UTC [6] DEBUG: removing write-ahead log file "00000001000000000000001D"
2020-04-23 14:55:50.424 UTC [6] DEBUG: removing write-ahead log file "00000001000000000000001C"
2020-04-23 14:55:50.433 UTC [6] DEBUG: removing write-ahead log file "00000001000000000000000F"
2020-04-23 14:55:50.442 UTC [6] DEBUG: removing write-ahead log file "00000001000000000000001F"
2020-04-23 14:55:50.455 UTC [6] DEBUG: removing write-ahead log file "00000001000000000000001A"
2020-04-23 14:55:50.471 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000020"
2020-04-23 14:55:50.480 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000018"
2020-04-23 14:55:50.489 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000011"
2020-04-23 14:55:50.502 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000016"
2020-04-23 14:55:50.518 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000017"
2020-04-23 14:55:50.529 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000010"
2020-04-23 14:55:50.536 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000019"
2020-04-23 14:55:50.547 UTC [6] DEBUG: removing write-ahead log file "000000010000000000000021"
2020-04-23 14:55:50.559 UTC [6] DEBUG: MultiXactId wrap limit is 2147483648, limited by database with OID 1
2020-04-23 14:55:50.559 UTC [6] DEBUG: MultiXact member stop limit is now 4294914944 based on MultiXact 1
2020-04-23 14:55:50.566 UTC [6] DEBUG: shmem_exit(0): 1 before_shmem_exit callbacks to make
2020-04-23 14:55:50.566 UTC [6] DEBUG: shmem_exit(0): 4 on_shmem_exit callbacks to make
2020-04-23 14:55:50.566 UTC [6] DEBUG: proc_exit(0): 2 callbacks to make
2020-04-23 14:55:50.566 UTC [6] DEBUG: exit(0)
2020-04-23 14:55:50.566 UTC [6] DEBUG: shmem_exit(-1): 0 before_shmem_exit callbacks to make
2020-04-23 14:55:50.566 UTC [6] DEBUG: shmem_exit(-1): 0 on_shmem_exit callbacks to make
2020-04-23 14:55:50.566 UTC [6] DEBUG: proc_exit(-1): 0 callbacks to make
2020-04-23 14:55:50.571 UTC [1] DEBUG: reaping dead processes
2020-04-23 14:55:50.572 UTC [10] DEBUG: autovacuum launcher started
2020-04-23 14:55:50.572 UTC [1] DEBUG: starting background worker process "logical replication launcher"
2020-04-23 14:55:50.572 UTC [10] DEBUG: InitPostgres
2020-04-23 14:55:50.572 UTC [10] DEBUG: my backend ID is 1
Is there any way to prevent PostgreSQL from removing WALs that have not been archived yet?

It looks like this is the bug "[BUG] non archived WAL removed during production crash recovery" reported a few weeks ago:
https://www.postgresql.org/message-id/20200331172229.40ee00dc%40firost
A patch is currently under development according to the PostgreSQL mailing list discussion, but it is not yet available. It could land in May at the earliest.
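Until the fix is released, a possible stopgap (just a sketch, assuming shell access to the data directory and that $PGDATA points at it; /backup/pending_wal is a made-up destination) is to copy the still-unarchived segments aside before starting a crashed cluster, so crash recovery cannot silently discard them:
# run before restarting a crashed cluster
cd "$PGDATA"/pg_wal
mkdir -p /backup/pending_wal
for ready in archive_status/*.ready; do
    [ -e "$ready" ] || continue            # nothing pending
    seg=$(basename "$ready" .ready)        # strip the .ready suffix to get the segment name
    cp "$seg" /backup/pending_wal/
done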

Related

PostgreSQL server process was terminated by signal 11

I'm using PostgreSQL v11.0 and executing a simple SQL query:
delete from base.sys_attribute where id=20;
It fails and returns the error:
server process (PID 29) was terminated by signal 11 (see log below)
Any idea how I can troubleshoot this issue? I'm completely stuck...
listof-db | 2019-05-10 08:54:46.425 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
listof-db | 2019-05-10 08:54:46.425 UTC [1] LOG: listening on IPv6 address "::", port 5432
listof-db | 2019-05-10 08:54:46.425 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
listof-db | 2019-05-10 08:54:46.435 UTC [20] LOG: database system was shut down at 2019-05-10 08:54:26 UTC
listof-db | 2019-05-10 08:54:46.438 UTC [1] LOG: database system is ready to accept connections
listof-db | 2019-05-10 08:56:52.295 UTC [1] LOG: server process (PID 29) was terminated by signal 11
listof-db | 2019-05-10 08:56:52.295 UTC [1] DETAIL: Failed process was running: delete from base.sys_attribute where id=20
listof-db | 2019-05-10 08:56:52.295 UTC [1] LOG: terminating any other active server processes
listof-db | 2019-05-10 08:56:52.295 UTC [24] WARNING: terminating connection because of crash of another server process
listof-db | 2019-05-10 08:56:52.295 UTC [24] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
listof-db | 2019-05-10 08:56:52.295 UTC [24] HINT: In a moment you should be able to reconnect to the database and repeat your command.
listof-db | 2019-05-10 08:56:52.486 UTC [1] LOG: all server processes terminated; reinitializing
listof-db | 2019-05-10 08:56:53.093 UTC [30] LOG: database system was interrupted; last known up at 2019-05-10 08:54:46 UTC
listof-db | 2019-05-10 08:56:53.187 UTC [30] LOG: database system was not properly shut down; automatic recovery in progress
listof-db | 2019-05-10 08:56:53.189 UTC [30] LOG: redo starts at 0/75955A8
listof-db | 2019-05-10 08:56:53.189 UTC [30] LOG: invalid record length at 0/75971B8: wanted 24, got 0
listof-db | 2019-05-10 08:56:53.189 UTC [30] LOG: redo done at 0/7597180
listof-db | 2019-05-10 08:56:53.194 UTC [1] LOG: database system is ready to accept connections
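(For context, one generic way to dig into a backend crash like this is to capture a core dump and pull a backtrace out of it; a sketch, assuming gdb is available in the container and the Debian-packaged binary path, with the core file path as a placeholder:)
# allow core dumps in the environment that starts postgres, reproduce the crash,
# then open the core file with gdb
ulimit -c unlimited
gdb /usr/lib/postgresql/11/bin/postgres /path/to/core/file
# at the gdb prompt, print the backtrace of the crashing backend:
bt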

PostgreSQL is not starting after pg_rewind

I am using pg_rewind as follows on the slave:
/usr/pgsql-11/bin/pg_rewind -D <data_dir_path> --source-server="port=5432 user=myuser host=<ip>"
The command completes successfully with:
source and target cluster are on the same timeline
no rewind required
After that, I created recovery.conf on the new slave as follows:
standby_mode = 'on'
primary_conninfo = 'host=<master_ip> port=5432 user=<uname> password=<password> sslmode=require sslcompression=0'
trigger_file = '/tmp/MasterNow'
After that I start PostgreSQL on the slave and check the status. I get the following messages:
]# systemctl status postgresql-11
● postgresql-11.service - PostgreSQL 11 database server
Loaded: loaded (/usr/lib/systemd/system/postgresql-11.service; enabled; vendor preset: disabled)
Active: activating (start) since Thu 2019-05-02 10:36:11 UTC; 33min ago
Docs: https://www.postgresql.org/docs/11/static/
Process: 26444 ExecStartPre=/usr/pgsql-11/bin/postgresql-11-check-db-dir ${PGDATA} (code=exited, status=0/SUCCESS)
Main PID: 26450 (postmaster)
CGroup: /system.slice/postgresql-11.service
├─26450 /usr/pgsql-11/bin/postmaster -D /var/lib/pgsql/11/data/
└─26458 postgres: startup recovering 000000060000000000000008
May 02 11:09:13 my.localhost postmaster[26450]: 2019-05-02 11:09:13 UTC LOG: record length 1485139969 at 0/8005CB0 too long
May 02 11:09:18 my.localhost postmaster[26450]: 2019-05-02 11:09:18 UTC LOG: record length 1485139969 at 0/8005CB0 too long
May 02 11:09:23 my.localhost postmaster[26450]: 2019-05-02 11:09:23 UTC LOG: record length 1485139969 at 0/8005CB0 too long
May 02 11:09:28 my.localhost postmaster[26450]: 2019-05-02 11:09:28 UTC LOG: record length 1485139969 at 0/8005CB0 too long
May 02 11:09:33 my.localhost postmaster[26450]: 2019-05-02 11:09:33 UTC LOG: record length 1485139969 at 0/8005CB0 too long
May 02 11:09:38 my.localhost postmaster[26450]: 2019-05-02 11:09:38 UTC LOG: record length 1485139969 at 0/8005CB0 too long
May 02 11:09:43 my.localhost postmaster[26450]: 2019-05-02 11:09:43 UTC LOG: record length 1485139969 at 0/8005CB0 too long
May 02 11:09:48 my.localhost postmaster[26450]: 2019-05-02 11:09:48 UTC LOG: record length 1485139969 at 0/8005CB0 too long
May 02 11:09:53 my.localhost postmaster[26450]: 2019-05-02 11:09:53 UTC LOG: record length 1485139969 at 0/8005CB0 too long
May 02 11:09:58 my.localhost postmaster[26450]: 2019-05-02 11:09:58 UTC LOG: record length 1485139969 at 0/8005CB0 too long
Hint: Some lines were ellipsized, use -l to show in full.
On the master, the pg_wal directory looks as follows:
root#{/var/lib/pgsql/11/data/pg_wal}#ls -R
.:
000000010000000000000003 000000020000000000000006 000000040000000000000006 000000050000000000000008 archive_status
000000010000000000000004 00000002.history 00000004.history 00000005.history
000000020000000000000004 000000030000000000000006 000000050000000000000006 000000060000000000000008
000000020000000000000005 00000003.history 000000050000000000000007 00000006.history
./archive_status:
000000050000000000000006.done 000000050000000000000007.done
PostgreSQL logs from slave:
May 3 06:08:58 postgres[9226]: [39-1] 2019-05-03 06:08:58 UTC LOG: entering standby mode
May 3 06:08:58 postgres[9226]: [40-1] 2019-05-03 06:08:58 UTC LOG: invalid resource manager ID 80 at 0/8005C78
May 3 06:08:58 postgres[9226]: [41-1] 2019-05-03 06:08:58 UTC DEBUG: switched WAL source from archive to stream after failure
May 3 06:08:58 postgres[9227]: [35-1] 2019-05-03 06:08:58 UTC DEBUG: find_in_dynamic_libpath: trying "/usr/pgsql-11/lib/libpqwalreceiver"
May 3 06:08:58 postgres[9227]: [36-1] 2019-05-03 06:08:58 UTC DEBUG: find_in_dynamic_libpath: trying "/usr/pgsql-11/lib/libpqwalreceiver.so"
May 3 06:08:58 postgres[9227]: [37-1] 2019-05-03 06:08:58 UTC LOG: started streaming WAL from primary at 0/8000000 on timeline 6
May 3 06:08:58 postgres[9227]: [38-1] 2019-05-03 06:08:58 UTC DEBUG: sendtime 2019-05-03 06:08:58.348488+00 receipttime 2019-05-03 06:08:58.350018+00 replication apply delay (N/A) transfer latency 1 ms
May 3 06:08:58 postgres[9227]: [39-1] 2019-05-03 06:08:58 UTC DEBUG: sending write 0/8020000 flush 0/0 apply 0/0
May 3 06:08:58 postgres[9227]: [40-1] 2019-05-03 06:08:58 UTC DEBUG: sending write 0/8020000 flush 0/8020000 apply 0/0
May 3 06:08:58 postgres[9226]: [42-1] 2019-05-03 06:08:58 UTC LOG: invalid resource manager ID 80 at 0/8005C78
May 3 06:08:58 postgres[9227]: [41-1] 2019-05-03 06:08:58 UTC DEBUG: sendtime 2019-05-03 06:08:58.349865+00 receipttime 2019-05-03 06:08:58.35253+00 replication apply delay 0 ms transfer latency 2 ms
May 3 06:08:58 postgres[9227]: [42-1] 2019-05-03 06:08:58 UTC DEBUG: sending write 0/8040000 flush 0/8020000 apply 0/0
May 3 06:08:58 postgres[9227]: [43-1] 2019-05-03 06:08:58 UTC DEBUG: sending write 0/8040000 flush 0/8040000 apply 0/0
May 3 06:08:58 postgres[9227]: [44-1] 2019-05-03 06:08:58 UTC DEBUG: sending write 0/8040000 flush 0/8040000 apply 0/0
May 3 06:08:58 postgres[9227]: [45-1] 2019-05-03 06:08:58 UTC FATAL: terminating walreceiver process due to administrator command
May 3 06:08:58 postgres[9227]: [46-1] 2019-05-03 06:08:58 UTC DEBUG: shmem_exit(1): 1 before_shmem_exit callbacks to make
May 3 06:08:58 postgres[9227]: [47-1] 2019-05-03 06:08:58 UTC DEBUG: shmem_exit(1): 5 on_shmem_exit callbacks to make
May 3 06:08:58 postgres[9227]: [48-1] 2019-05-03 06:08:58 UTC DEBUG: proc_exit(1): 2 callbacks to make
May 3 06:08:58 postgres[9227]: [49-1] 2019-05-03 06:08:58 UTC DEBUG: exit(1)
May 3 06:08:58 postgres[9227]: [50-1] 2019-05-03 06:08:58 UTC DEBUG: shmem_exit(-1): 0 before_shmem_exit callbacks to make
May 3 06:08:58 postgres[9227]: [51-1] 2019-05-03 06:08:58 UTC DEBUG: shmem_exit(-1): 0 on_shmem_exit callbacks to make
May 3 06:08:58 postgres[9227]: [52-1] 2019-05-03 06:08:58 UTC DEBUG: proc_exit(-1): 0 callbacks to make
May 3 06:08:58 postgres[9218]: [35-1] 2019-05-03 06:08:58 UTC DEBUG: reaping dead processes
I'd say that the standby is trying to recover along the wrong timeline (5, I guess), does not follow the new primary to the latest timeline, and keeps hitting an invalid record in the WAL file.
I'd add the following to recovery.conf:
recovery_target_timeline = 'latest'
For a more detailed analysis, look into the PostgreSQL log file.
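Put together, the recovery.conf from the question would then look something like this (a sketch; the connection values are the same placeholders used above):
standby_mode = 'on'
primary_conninfo = 'host=<master_ip> port=5432 user=<uname> password=<password> sslmode=require sslcompression=0'
trigger_file = '/tmp/MasterNow'
# follow the primary onto whichever timeline it is currently on
recovery_target_timeline = 'latest'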

Has anyone migrated Stellar's Docker Compose to Kubernetes and fixed the issue with Stellar Horizon DB?

I may be encountering the same issue described in "Horizon: does not exit if database connection fails #898" (https://github.com/stellar/go/issues/898), but with a different setup.
I am in the process of migrating the https://github.com/satoshipay/docker-stellar-horizon Docker Compose definitions to Kubernetes. I have been able to migrate most of the setup, but I am hitting a problem with Horizon where the DB is not created during startup. I believe I have Stellar Core, with its dependency on Postgres, working as designed, with the DB created as part of startup, but the setup is different for Horizon.
The current issue I am hitting is the following:
Horizon Server Pod Logs
todkapmcbookpro:kubernetes todd$ kubectl get pods
NAME READY STATUS RESTARTS AGE
postgres-horizon-564d479db4-2xvqd 1/1 Running 0 20m
postgres-sc-9f5f7fb4-prlpr 1/1 Running 0 22m
stellar-core-7ff77b4db8-tx4mt 1/1 Running 0 18m
stellar-horizon-6cff98554b-d7djn 0/1 CrashLoopBackOff 8 18m
todkapmcbookpro:kubernetes todd$ kubectl logs stellar-horizon-6cff98554b-d7djn
Initializing Horizon database...
2019/05/02 12:58:09 connect failed: pq: database "stellar-horizon" does not exist
Horizon database initialization failed (possibly because it has been done before)
2019/05/02 12:58:09 pq: database "stellar-horizon" does not exist
todkapmcbookpro:kubernetes todd$
Horizon Postgres DB pod logs
todkapmcbookpro:kubernetes todd$ kubectl logs postgres-horizon-564d479db4-2xvqd
2019-05-02 12:40:06.424 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2019-05-02 12:40:06.424 UTC [1] LOG: listening on IPv6 address "::", port 5432
2019-05-02 12:40:06.437 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2019-05-02 12:40:06.444 UTC [23] LOG: database system was interrupted; last known up at 2019-05-02 12:38:19 UTC
2019-05-02 12:40:06.453 UTC [23] LOG: database system was not properly shut down; automatic recovery in progress
2019-05-02 12:40:06.454 UTC [23] LOG: redo starts at 0/1636FB8
2019-05-02 12:40:06.454 UTC [23] LOG: invalid record length at 0/1636FF0: wanted 24, got 0
2019-05-02 12:40:06.454 UTC [23] LOG: redo done at 0/1636FB8
2019-05-02 12:40:06.459 UTC [1] LOG: database system is ready to accept connections
2019-05-02 12:42:35.675 UTC [30] FATAL: database "stellar-horizon" does not exist
2019-05-02 12:42:35.690 UTC [31] FATAL: database "stellar-horizon" does not exist
2019-05-02 12:42:37.123 UTC [32] FATAL: database "stellar-horizon" does not exist
2019-05-02 12:42:37.136 UTC [33] FATAL: database "stellar-horizon" does not exist
2019-05-02 12:42:50.131 UTC [34] FATAL: database "stellar-horizon" does not exist
2019-05-02 12:42:50.153 UTC [35] FATAL: database "stellar-horizon" does not exist
2019-05-02 12:43:16.094 UTC [36] FATAL: database "stellar-horizon" does not exist
2019-05-02 12:43:16.115 UTC [37] FATAL: database "stellar-horizon" does not exist
2019-05-02 12:43:57.097 UTC [38] FATAL: database "stellar-horizon" does not exist
2019-05-02 12:43:57.111 UTC [39] FATAL: database "stellar-horizon" does not exist
2019-05-02 12:45:21.050 UTC [40] FATAL: database "stellar-horizon" does not exist
2019-05-02 12:45:21.069 UTC [41] FATAL: database "stellar-horizon" does not exist
2019-05-02 12:48:05.122 UTC [42] FATAL: database "stellar-horizon" does not exist
2019-05-02 12:48:05.145 UTC [43] FATAL: database "stellar-horizon" does not exist
2019-05-02 12:53:07.077 UTC [44] FATAL: database "stellar-horizon" does not exist
2019-05-02 12:53:07.099 UTC [45] FATAL: database "stellar-horizon" does not exist
2019-05-02 12:58:09.084 UTC [46] FATAL: database "stellar-horizon" does not exist
2019-05-02 12:58:09.098 UTC [47] FATAL: database "stellar-horizon" does not exist
2019-05-02 13:03:18.055 UTC [48] FATAL: database "stellar-horizon" does not exist
2019-05-02 13:03:18.071 UTC [49] FATAL: database "stellar-horizon" does not exist
2019-05-02 13:08:28.057 UTC [50] FATAL: database "stellar-horizon" does not exist
2019-05-02 13:08:28.078 UTC [51] FATAL: database "stellar-horizon" does not exist
2019-05-02 13:13:42.071 UTC [52] FATAL: database "stellar-horizon" does not exist
2019-05-02 13:13:42.097 UTC [53] FATAL: database "stellar-horizon" does not exist
2019-05-02 13:18:55.128 UTC [54] FATAL: database "stellar-horizon" does not exist
2019-05-02 13:18:55.152 UTC [55] FATAL: database "stellar-horizon" does not exist
It would be ideal if the setup for Horizon and Core were the same (especially as it relates to the DB configuration env properties). I think I have the settings correct but may be missing something subtle.
I have a branch of this WIP where the failure occurs. I have included a quick setup script as well as a minikube setup in this branch:
https://github.com/todkap/stellar-testnet/tree/k8-deploy/kubernetes
We were able to resolve this and published an article demonstrating the end-to-end flow:
https://itnext.io/how-to-deploy-a-stellar-validator-on-kubernetes-with-helm-a111e5dfe437
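(Side note on the logs above: the FATAL lines mean the "stellar-horizon" database was never created. With the standard postgres image, that database is normally created at first init from the POSTGRES_DB environment variable; as a hypothetical one-off fix, it can also be created by hand inside the pod, assuming the superuser is postgres:)
kubectl exec -it postgres-horizon-564d479db4-2xvqd -- \
    psql -U postgres -c 'CREATE DATABASE "stellar-horizon";'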

PostgreSQL won't start

PostgreSQL won't start on Ubuntu 16.04. Please help.
This is what the error log reads:
2018-10-12 13:42:34 UTC [3371-2] LOG: received fast shutdown request
2018-10-12 13:42:34 UTC [3371-3] LOG: aborting any active transactions
2018-10-12 13:42:34 UTC [3376-2] LOG: autovacuum launcher shutting down
2018-10-12 13:42:34 UTC [3373-1] LOG: shutting down
2018-10-12 13:42:34 UTC [3373-2] LOG: database system is shut down
2018-10-12 13:42:52 UTC [3855-1] LOG: database system was shut down at 2018-10-12 13:42:34 UTC
2018-10-12 13:42:52 UTC [3855-2] LOG: MultiXact member wraparound protections are now enabled
2018-10-12 13:42:52 UTC [3853-1] LOG: database system is ready to accept connections
2018-10-12 13:42:52 UTC [3859-1] LOG: autovacuum launcher started
2018-10-12 13:42:52 UTC [3861-1] [unknown]#[unknown] LOG: incomplete startup packet
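(On Ubuntu, the quickest way to see whether a cluster is actually down, and why, is via the packaged wrapper tools; a sketch, with <version> standing in for the installed major version:)
# list all clusters and whether they are online or down
pg_lsclusters
# start the cluster and watch the log it writes to
sudo pg_ctlcluster <version> main start
sudo tail -n 50 /var/log/postgresql/postgresql-<version>-main.log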

Can't connect to postgresql server after moving database files

I want to move my postgresql databases to an external hard drive (HDD 2TB USB 3.0). I copied the whole directory:
/var/lib/postgresql/9.4/main/
to the external drive, preserving permissions, with the following command (run as the user postgres):
$ rsync -aHAX /var/lib/postgresql/9.4/main/* new_dir_path
The first run of this command was interrupted, but on the second attempt everything was copied (basically one database of about 800 GB). In the file
/etc/postgresql/9.4/main/postgresql.conf
I changed the line
data_directory = '/var/lib/postgresql/9.4/main'
to point to the new location. I restarted the postgresql service, and when I run psql as the user postgres, I get:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
I didn't change any other settings. There is no pidfile 'postmaster.pid' in the new location (or in the old one). When I run a command
$ /usr/lib/postgresql/9.4/bin/postgres --single -D /etc/postgresql/9.4/main -P -d 1
I get
2017-03-16 20:47:39 CET [2314-1] DEBUG: mmap with MAP_HUGETLB failed, huge pages disabled: Cannot allocate memory
2017-03-16 20:47:39 CET [2314-2] NOTICE: database system was shut down at 2017-03-16 20:01:23 CET
2017-03-16 20:47:39 CET [2314-3] DEBUG: checkpoint record is at 647/4041B3A0
2017-03-16 20:47:39 CET [2314-4] DEBUG: redo record is at 647/4041B3A0; shutdown TRUE
2017-03-16 20:47:39 CET [2314-5] DEBUG: next transaction ID: 1/414989450; next OID: 112553
2017-03-16 20:47:39 CET [2314-6] DEBUG: next MultiXactId: 485048384; next MultiXactOffset: 1214064579
2017-03-16 20:47:39 CET [2314-7] DEBUG: oldest unfrozen transaction ID: 259446705, in database 12141
2017-03-16 20:47:39 CET [2314-8] DEBUG: oldest MultiXactId: 476142442, in database 12141
2017-03-16 20:47:39 CET [2314-9] DEBUG: transaction ID wrap limit is 2406930352, limited by database with OID 12141
2017-03-16 20:47:39 CET [2314-10] DEBUG: MultiXactId wrap limit is 2623626089, limited by database with OID 12141
2017-03-16 20:47:39 CET [2314-11] DEBUG: starting up replication slots
2017-03-16 20:47:39 CET [2314-12] DEBUG: oldest MultiXactId member is at offset 1191132700
2017-03-16 20:47:39 CET [2314-13] DEBUG: MultiXact member stop limit is now 1191060352 based on MultiXact 476142442
PostgreSQL stand-alone backend 9.4.9
backend>
but I don't know how to interpret this output. When I revert the changes in postgresql.conf, everything works fine. Interestingly, a few months ago I moved the database the same way, but to a local directory, and it worked.
I use postgresql-9.4 and debian-jessie.
Thanks for your help!
UPDATE
Content of the log file:
$ cat /var/log/postgresql/postgresql-9.4-main.log
2017-03-14 17:07:16 CET [13822-2] LOG: received fast shutdown request
2017-03-14 17:07:16 CET [13822-3] LOG: aborting any active transactions
2017-03-14 17:07:16 CET [13827-3] LOG: autovacuum launcher shutting down
2017-03-14 17:07:16 CET [13824-1] LOG: shutting down
2017-03-14 17:07:16 CET [13824-2] LOG: database system is shut down
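(One hedged thing to check, since the single-user backend starts fine against the new location: the postmaster started by the service is picky about the data directory itself. It must be owned by postgres and must not be group- or world-accessible, which copying onto an external drive can easily break. A sketch, reusing new_dir_path from the question:)
sudo ls -ld new_dir_path                      # should show drwx------ postgres postgres
sudo chown -R postgres:postgres new_dir_path
sudo chmod 700 new_dir_path
sudo pg_ctlcluster 9.4 main restart           # then re-check the cluster log for errors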