I've just stumbled upon this error while testing failover of a PostgreSQL 9.4 cluster I've set up. Here I'm trying to promote a slave to be the new master:
$ repmgr -f /etc/repmgr/repmgr.conf --verbose standby promote
2014-09-22 10:46:37 UTC LOG: database system shutdown was interrupted; last known up at 2014-09-22 10:44:02 UTC
2014-09-22 10:46:37 UTC LOG: database system was not properly shut down; automatic recovery in progress
2014-09-22 10:46:37 UTC LOG: redo starts at 0/18000028
2014-09-22 10:46:37 UTC LOG: consistent recovery state reached at 0/19000600
2014-09-22 10:46:37 UTC LOG: record with zero length at 0/1A000090
2014-09-22 10:46:37 UTC LOG: redo done at 0/1A000028
2014-09-22 10:46:37 UTC LOG: last completed transaction was at log time 2014-09-22 10:36:22.679806+00
2014-09-22 10:46:37 UTC FATAL: could not open directory "pg_logical/snapshots": No such file or directory
2014-09-22 10:46:37 UTC LOG: startup process (PID 2595) exited with exit code 1
2014-09-22 10:46:37 UTC LOG: aborting startup due to startup process failure
The pg_logical/snapshots directory does in fact exist on the master node, and it is empty.
UPD: I've just manually created the empty directories pg_logical/snapshots and pg_logical/mappings, and the server started without complaining. repmgr standby clone seems to omit these directories while syncing. But the question still remains, because I'm curious what this directory is for; maybe I'm missing something in my setup. Simply googling it did not yield any meaningful results.
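For reference, the manual workaround amounts to recreating the missing directories on the standby before starting it again (a sketch; $PGDATA stands for the standby's data directory):
# Run as the postgres user on the standby.
mkdir -p $PGDATA/pg_logical/snapshots $PGDATA/pg_logical/mappings
chmod 700 $PGDATA/pg_logical/snapshots $PGDATA/pg_logical/mappings
pg_ctl -D $PGDATA start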
It's for the new logical changeset extraction / logical replication feature in 9.4.
This shouldn't happen, though... it suggests a significant bug somewhere, probably in repmgr. I'll wait for details (repmgr version, etc.).
Update: Confirmed, it's a repmgr bug. It's fixed in git master already (and was before this report) and will be in the next release. Which had better be soon, given the significance of this issue.
Disk-full situation led to some corruption in PG files, and the server keeps restarting.
2023-02-06 08:17:54 UTC [1] LOG: starting PostgreSQL 13.7 (Ubuntu 13.7-1.pgdg20.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0, 64-bit
2023-02-06 08:17:54 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2023-02-06 08:17:54 UTC [1] LOG: listening on IPv6 address "::", port 5432
2023-02-06 08:17:54 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2023-02-06 08:17:54 UTC [8] LOG: database system was shut down at 2023-02-06 08:17:45 UTC
2023-02-06 08:17:54 UTC [8] PANIC: could not open file "pg_replslot/slot_name/state": No such file or directory
2023-02-06 08:17:55 UTC [1] LOG: startup process (PID 8) was terminated by signal 6: Aborted
2023-02-06 08:17:55 UTC [1] LOG: aborting startup due to startup process failure
2023-02-06 08:17:55 UTC [1] LOG: database system is shut down
Tried removing pg_replslot/slot_name, which led to a "password auth failure"; after resetting the DB password (via pg_hba.conf), the DB is not showing up!
Is there any proper way to recover from this state? The /pg/main files and pgdata directories seem to be available, except for this slot information.
Steps done so far:
I'm using a PostgreSQL Docker container.
The disk used for PG got full. I cleaned up some log files, and docker system prune was used to remove unused images, which freed some space but led to this issue.
We have seen a similar issue multiple times in dev environments: disk full leading to corrupted files ("unable to read" / "No such file or directory" kinds of errors).
Tried removing the pg_replslot/slot_name directory, which allowed me to start the PostgreSQL container (previously the container kept restarting).
Reset the password by using trust in the auth column of pg_hba.conf.
Now \l in the psql shell shows only the postgres DB and the default DBs, not our custom DB.
Our main DB is in a separate tablespace and is not showing up in the list.
MOST importantly, the standby is showing the SAME errors! Probably someone messed it up?
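One thing worth checking is whether the cluster still knows about the tablespace and whether its directory is still present; a quick sketch (run as a superuser against the running server; paths and names are illustrative):
# List tablespaces and where they point, and the databases the catalog still knows about.
psql -U postgres -c "SELECT spcname, pg_tablespace_location(oid) FROM pg_tablespace;"
psql -U postgres -c "SELECT datname, dattablespace FROM pg_database;"
# Tablespace symlinks live under the data directory; a dangling link means the target
# directory is gone (e.g. removed by the cleanup or docker system prune).
ls -l "$PGDATA/pg_tblspc/"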
I had a docker container running timescaleDB. The database data was stored outside the container.
docker run -d --name timescale -v /<DATA>:/var/lib/postgresql/data timescale/timescaledb-postgis:latest-pg10
Something strange happened lately: I logged in and saw that all the databases had suddenly vanished.
I see the following in the log file:
2021-03-13 11:32:00.215 UTC [21] LOG: database system was interrupted; last known up at 2021-03-11 16:16:19 UTC
2021-03-13 11:32:00.242 UTC [21] LOG: database system was not properly shut down; automatic recovery in progress
2021-03-13 11:32:00.243 UTC [21] LOG: redo starts at 0/15C1270
2021-03-13 11:32:00.243 UTC [21] LOG: invalid record length at 0/15C12A8: wanted 24, got 0
2021-03-13 11:32:00.243 UTC [21] LOG: redo done at 0/15C1270
2021-03-13 11:32:00.247 UTC [8] LOG: database system is ready to accept connections
2021-03-13 20:33:10.424 UTC [31] LOG: could not receive data from client: Operation timed out
2021-03-13 20:33:10.424 UTC [29] LOG: could not receive data from client: Operation timed out
Does that mean the database has been corrupted? If so, is there a way to recover it somehow? The container has been running for 3 years without a problem, and suddenly there is this unexpected loss of the database.
Thanks
Yes, the database was corrupted, but it was recovered by the automated recovery process. It looks like the db system started working, since it logged this message: database system is ready to accept connections. This means that the log file recovery completed properly (which doesn't mean that the database files are fully consistent).
When the database is shut down abruptly, there is a small chance of file-level corruption as well. The good news is that I don't see anything in the log after the recovery that suggests this is the case; however, you need to have a backup of the files.
The next log message, could not receive data from client: Operation timed out, is not related to recovery; it's due to a client application that terminated without properly closing the connection.
You can find more information on corruption and its causes in the PostgreSQL wiki.
If you depend on the data in the database, always keep backups. The easiest way is to use pg_dumpall. This dumps the data in plain-text format as a series of SQL statements, and you will be able to import the data into later versions of PostgreSQL.
So my recommendation: before you do anything else with it, STOP THE CONTAINER AND TAKE A BACKUP OF THE FILES. Recovery is a trial-and-error process, and you will need a fresh copy of the files to try different things. After you do this, export the data with pg_dumpall. If this passes, you can resume normal operation of the database.
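A minimal sketch of that order of operations, assuming the container from the question is still named timescale and the data directory is the host path bind-mounted in the docker run command above:
# 1) Stop the server and take a file-level copy of the data directory first.
docker stop timescale
sudo tar czf pgdata-backup.tar.gz /<DATA>        # replace /<DATA> with the real host path
# 2) Start it again and take a logical export of everything it can still read.
docker start timescale
docker exec timescale pg_dumpall -U postgres > all-databases.sql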
I have tested different scenarios for doing switchover and switchback in PostgreSQL version 9.4.1.
Scenario 1: PostgreSQL switchover and switchback in 9.4.1.
Scenario 2: Is the parameter recovery_target_timeline='latest' mandatory for switchover and switchback in PostgreSQL 9.4.1?
Scenario 3: The one described below on this page.
To test scenario 3, I performed the steps below.
1) Stop the application connected to the primary server.
2) Confirm all applications were stopped and all threads were disconnected from the primary DB.
#192.x.x.129(Primary)
3) Cleanly shut down the primary using
pg_ctl -D $PGDATA stop -m fast
# On the DR side (192.x.x.128), check the sync status:
postgres=# select pg_last_xlog_receive_location(),pg_last_xlog_replay_location();
-[ RECORD 1 ]-----------------+-----------
pg_last_xlog_receive_location | 4/57000090
pg_last_xlog_replay_location | 4/57000090
4) Stop the DR server (192.x.x.128):
pg_ctl -D $PGDATA stop -mf
pg_log:
2019-12-02 13:16:09 IST LOG: received fast shutdown request
2019-12-02 13:16:09 IST LOG: aborting any active transactions
2019-12-02 13:16:09 IST LOG: shutting down
2019-12-02 13:16:09 IST LOG: database system is shut down
#192.x.x.128(DR)
5) Make the following changes on the DR server:
mv recovery.conf recovery.conf_bkp
6) Make changes on 192.x.x.129 (the primary):
[postgres@localhost data]$ cat recovery.conf
standby_mode = 'on'
primary_conninfo = 'user=replication password=postgres host=192.x.x.128 port=5432 sslmode=prefer sslcompression=1 krbsrvname=postgres'
restore_command = 'cp %p /home/postgres/restore/%f'
trigger_file='/tmp/promote'
7) Start the DR server in read-write mode:
pg_ctl -D $PGDATA start
pg_log:
2019-12-02 13:20:21 IST LOG: database system was shut down in recovery at 2019-12-02 13:16:09 IST
2019-12-02 13:20:22 IST LOG: database system was not properly shut down; automatic recovery in progress
2019-12-02 13:20:22 IST LOG: consistent recovery state reached at 4/57000090
2019-12-02 13:20:22 IST LOG: invalid record length at 4/57000090
2019-12-02 13:20:22 IST LOG: redo is not required
2019-12-02 13:20:22 IST LOG: database system is ready to accept connections
2019-12-02 13:20:22 IST LOG: autovacuum launcher started
We can see in the above logs that the OLD primary is now the DR of the new primary (which was the OLD DR), and no errors are shown, because the timeline ID on the new primary is the same as the one that already exists on the new DR.
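As a quick cross-check, pg_controldata can be used on both nodes to confirm they are still on the same timeline (a sketch; $PGDATA as in the steps above):
# Run on each node; the reported TimeLineID values should match.
pg_controldata $PGDATA | grep TimeLineID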
8) Start the old primary in read-only (standby) mode:
pg_ctl -D $PGDATA start
logs:
2019-12-02 13:24:50 IST LOG: database system was shut down at 2019-12-02 11:14:50 IST
2019-12-02 13:24:51 IST LOG: entering standby mode
cp: cannot stat ‘pg_xlog/RECOVERYHISTORY’: No such file or directory
cp: cannot stat ‘pg_xlog/RECOVERYXLOG’: No such file or directory
2019-12-02 13:24:51 IST LOG: consistent recovery state reached at 4/57000090
2019-12-02 13:24:51 IST LOG: record with zero length at 4/57000090
2019-12-02 13:24:51 IST LOG: database system is ready to accept read only connections
2019-12-02 13:24:51 IST LOG: started streaming WAL from primary at 4/57000000 on timeline 9
2019-12-02 13:24:51 IST LOG: redo starts at 4/57000090
Question 1: In this scenario I have performed only the switchover to show you; with this method we can do both switchover and switchback. If switchover and switchback work with this method, why did the PostgreSQL community introduce recovery_target_timeline='latest' and apply patches (see this blog: https://www.enterprisedb.com/blog/switchover-switchback-in-postgresql-9-3) from PostgreSQL 9.3 up to the latest version?
Question 2: What does the message cp: cannot stat ‘pg_xlog/RECOVERYHISTORY’: No such file or directory in the above log mean?
Question 3: Between scenario 1 and scenario 3, which method is the correct way to do switchover and switchback? Scenario 2 gets an error because we must use recovery_target_timeline='latest', as all community experts know.
Answers:
If you shut down the standby cleanly, then remove recovery.conf and restart it, it will come up, but has to perform crash recovery (database system was not properly shut down).
The proper way to promote a standby to a primary is by using the trigger file or running pg_ctl promote (or, from v12 on, by running the SQL function pg_promote). Then you have no down time and don't need to perform crash recovery.
Promoting the standby will make it pick a new time line, so you need recovery_target_timeline = 'latest' if you want the new standby to follow that time line switch.
That is caused by your restore_command.
The method shown in 1. above is the correct one.
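To make the promotion-based switchover concrete, here is a rough sketch using 9.4-era commands; the host address and connection settings are illustrative, and this assumes the old primary was shut down cleanly before the promotion (otherwise pg_rewind or a fresh base backup is needed):
# On the standby that should become the new primary:
pg_ctl -D $PGDATA promote
# On the old primary, create recovery.conf before starting it as the new standby:
cat > $PGDATA/recovery.conf <<'EOF'
standby_mode = 'on'
primary_conninfo = 'host=192.x.x.128 port=5432 user=replication'
recovery_target_timeline = 'latest'
EOF
pg_ctl -D $PGDATA start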
I am trying to restore a PostgreSQL database to a point in time.
When I am using only restore_command in recovery.conf, it works fine.
restore_command = 'cp /var/lib/pgsql/pg_log_archive/%f %p'
When I am using the recovery_target_time parameter, it is not restoring to the target time.
restore_command = 'cp /var/lib/pgsql/pg_log_archive/%f %p'
recovery_target_time='2018-06-05 06:43:00.0'
Below is the log file content:
2018-06-05 07:31:39.166 UTC [22512] LOG: database system was interrupted; last known up at 2018-06-05 06:35:52 UTC
2018-06-05 07:31:39.664 UTC [22512] LOG: starting point-in-time recovery to 2018-06-05 06:43:00+00
2018-06-05 07:31:39.671 UTC [22512] LOG: restored log file "00000005.history" from archive
2018-06-05 07:31:39.769 UTC [22512] LOG: restored log file "00000005000000020000008F" from archive
2018-06-05 07:31:39.816 UTC [22512] LOG: redo starts at 2/8F000028
2018-06-05 07:31:39.817 UTC [22512] LOG: consistent recovery state reached at 2/8F000130
2018-06-05 07:31:39.818 UTC [22510] LOG: database system is ready to accept read only connections
2018-06-05 07:31:39.912 UTC [22512] LOG: restored log file "000000050000000200000090" from archive
2018-06-05 07:31:39.996 UTC [22512] LOG: recovery stopping before abort of transaction 9525, time 2018-06-05 06:45:02.088502+00
2018-06-05 07:31:39.996 UTC [22512] LOG: recovery has paused
I am trying to restore the database instance to 06:43:00. Why is it recovering up to 06:45:02?
EDIT
In the first scenario, recovery.conf was converted into recovery.done, but this didn't happen in the second scenario.
What could be the reason for this?
You forgot to set
recovery_target_action = 'promote'
After point-in-time-recovery, recovery_target_action determines how PostgreSQL will proceed.
The default value is pause which means that PostgreSQL will do nothing and wait for you to tell it how to proceed.
To complete recovery, connect to the database and run
SELECT pg_wal_replay_resume();
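For reference, a recovery.conf that recovers to the target time and then promotes automatically might look like this (a sketch reusing the values from the question):
restore_command = 'cp /var/lib/pgsql/pg_log_archive/%f %p'
recovery_target_time = '2018-06-05 06:43:00.0'
recovery_target_action = 'promote'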
It seems that there has been no database activity logged between 06:43:00 and 06:45:02. Observe that the log says recovery stopping before abort of transaction 9525.
I have an old copy of my PostgreSQL db folder (/var/lib/postgresql/9.5/main/) from my server. Now I want to get the data out of the files. So I copied the main folder to my local machine and changed the PostgreSQL config (/etc/postgresql/9.5/main/postgresql.conf) to point to that directory. I also changed the ownership of the main directory to the user postgres. After restarting the PostgreSQL service (sudo service postgresql restart), it doesn't really work.
What am I doing wrong? (Yes, I know pg_dump is the preferred way, but in this case...)
So my question: does this even work?
Or is there another way to get the data out of this?
Everything is done on Ubuntu 16.04.
Edit:
The log file after changing postgresql.conf to point to the new directory:
2017-10-13 06:15:43 CEST [968-1] LOG: database system was shut down at 2017-10-13 00:21:04 CEST
2017-10-13 06:15:43 CEST [968-2] LOG: MultiXact member wraparound protections are now enabled
2017-10-13 06:15:43 CEST [959-1] LOG: database system is ready to accept connections
2017-10-13 06:15:43 CEST [975-1] LOG: autovacuum launcher started
2017-10-13 06:15:43 CEST [983-1] [unknown]#[unknown] LOG: incomplete startup packet
2017-10-13 06:47:55 CEST [975-2] LOG: autovacuum launcher shutting down
2017-10-13 06:47:55 CEST [959-2] LOG: received smart shutdown request
2017-10-13 06:47:55 CEST [972-1] LOG: shutting down
2017-10-13 06:47:55 CEST [972-2] LOG: database system is shut down
2017-10-13 06:47:55 CEST [4667-1] FATAL: database files are incompatible with server
2017-10-13 06:47:55 CEST [4667-2] DETAIL: The database cluster was initialized without USE_FLOAT8_BYVAL but the server was compiled with USE_FLOAT8_BYVAL.
2017-10-13 06:47:55 CEST [4667-3] HINT: It looks like you need to recompile or initdb.
OK, that pointed me to the problem. The server is armv7l, whereas the local machine is x86_64 (uname -m). So is there no chance to get the data out of it?
thx, Luc
If it's really true that your data directory is from an armv7l system and your local system is x86_64, you're going to have some difficulties.
The immediate error about USE_FLOAT8_BYVAL is because armv7l is 32-bit and cannot pass 64-bit (8-byte) floating-point values by value. Your 64-bit host can. But if you recompiled a custom postgres with USE_FLOAT8_BYVAL disabled, you'd likely just run into other issues.
I suggest installing PostgreSQL on a matching ARM system to recover the data. Data directories for PostgreSQL are not portable across architectures (for performance reasons).
If you do not have access to the ARM system anymore, an emulator like qemu should be able to help you.
Otherwise, maybe you can compile a modified PostgreSQL (probably starting with 32-bit x86) that can read the data-dir, with appropriate configure options etc. I've never needed to try this.
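If a matching ARM system (or an emulated one via qemu) is available, the recovery is essentially what was attempted on x86_64, just on the right architecture; a sketch, assuming Debian/Ubuntu packaging of PostgreSQL 9.5 and /srv/old-main as an illustrative location for the copied data folder:
# On a 32-bit ARM system with PostgreSQL 9.5 installed; major version and architecture must match.
sudo chown -R postgres:postgres /srv/old-main
# Point the packaged cluster at the copied folder and restart it.
sudo sed -i "s|^data_directory.*|data_directory = '/srv/old-main'|" /etc/postgresql/9.5/main/postgresql.conf
sudo service postgresql restart
# Take a logical dump that can then be restored on any architecture.
sudo -u postgres pg_dumpall > old-cluster.sql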