Making Postgres backups from Heroku database followers

You used to be able to create database dumps of database followers on Heroku:
heroku pgbackups:capture HEROKU_FOLLOWER_COLOR --expire
This stopped working recently.
If I run heroku logs --tail --ps pgbackups, I get:
2013-03-07T17:27:49+00:00 app[pgbackups]: dump_progress: start
2013-03-07T17:27:49+00:00 app[pgbackups]: pg_dump-9.2.1-64bit: [archiver (db)] query failed: ERROR: cannot use serializable mode in a hot standby
2013-03-07T17:27:49+00:00 app[pgbackups]: HINT: You can use REPEATABLE READ instead.
2013-03-07T17:27:49+00:00 app[pgbackups]: pg_dump-9.2.1-64bit: [archiver (db)] query was: SET TRANSACTION ISOLATION LEVEL SERIALIZABLE, READ ONLY, DEFERRABLE
2013-03-07T17:27:49+00:00 app[pgbackups]: dump_progress: 0B
2013-03-07T17:27:49+00:00 app[pgbackups]:
2013-03-07T17:27:49+00:00 app[pgbackups]: dump_progress: error
Dumping from the main DATABASE_URL seems to work fine though.
Is this a recent change to the Heroku platform, or am I doing something wrong?
Also, is there a performance hit if I do a dump of the main database?

Backups from followers should be working again now. The failure was due to some changes we made to pgbackups (namely, adding the --serializable-deferrable flag for pg_dump). We missed that this wouldn't work on followers--sorry about that.
Thanks,
Maciek
Heroku Postgres
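For reference, the restriction the logs hit can be reproduced directly: a hot standby rejects SERIALIZABLE ... DEFERRABLE but accepts REPEATABLE READ, which is what pg_dump uses when --serializable-deferrable is not passed. A sketch, assuming $FOLLOWER_URL is set to a follower's connection URL (it is a placeholder, not something from the thread):

```shell
# Fails on a hot standby -- this is the same statement shown in the
# pgbackups log ("cannot use serializable mode in a hot standby"):
psql "$FOLLOWER_URL" -c 'BEGIN; SET TRANSACTION ISOLATION LEVEL SERIALIZABLE, READ ONLY, DEFERRABLE; SELECT 1; COMMIT;'

# Succeeds -- the isolation level the error hint suggests, and the one
# pg_dump runs at without the --serializable-deferrable flag:
psql "$FOLLOWER_URL" -c 'BEGIN; SET TRANSACTION ISOLATION LEVEL REPEATABLE READ, READ ONLY; SELECT 1; COMMIT;'
```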

Related

PostgreSQL backup in custom format (-F c) fails during pg_restore (COPY command in log)

We have a PostgreSQL custom-format (-F c) database backup, ~1 GB in size, that could not be restored on two of our users' machines.
The error that occurs is:
pg_restore: [archiver (db)] error returned by PQputCopyData
and in the logs there is an error in the COPY command.
All the reports we found of COPY-command errors during pg_restore were related to textual (SQL) backups, which is not the case here.
Any ideas?
Below is information that describes the issue in more detail:
1. File integrity is OK, checked with "Microsoft File Checksum Integrity Verifier".
2. Backup and restore are performed with PostgreSQL 9.6.5 64-bit.
3. The pg_dump call for the backup:
pg_dump -U username -F c -Z 9 mydatabase > myarchive
4. The database on the client is created with:
CREATE DATABASE mydatabase WITH TEMPLATE = template0 ENCODING = 'UTF8' OWNER=user;
5. The pg_restore call:
pg_restore.exe -U user --dbname=mydatabase --verbose --no-owner --role=user
6. Example logs; there are repeated errors on seemingly random tables:
2020-12-07 13:40:56 GMT LOG: checkpoints are occurring too frequently (21 seconds apart)
2020-12-07 13:40:56 GMT HINT: Consider increasing the configuration parameter "max_wal_size".
2020-12-07 13:40:57 GMT ERROR: extra data after last expected column
2020-12-07 13:40:57 GMT CONTEXT: COPY substance, line 21511: "21743 \N 2 1d8c29d2d4dc17ccec4a29710c2f190a e98906e08d4cf1ac23bc4a5a26f83e73 1d8c29d2d4dc17ccec4a297..."
2020-12-07 13:40:57 GMT STATEMENT: COPY substance (id, text_id, storehouse_id, i_tb_id, i_twod_tb_id, tb_id, twod_tb_id, o_smiles, i_smiles_id, i_twod_smiles_id, smiles_id, twod_smiles_id, substance_type)
2020-12-07 13:40:57 GMT FATAL: invalid frontend message type 48
2020-12-07 13:40:57 GMT LOG: PID 105976 in cancel request did not match any process
or
2020-12-07 14:35:42 GMT LOG: checkpoints are occurring too frequently (16 seconds apart)
2020-12-07 14:35:42 GMT HINT: Consider increasing the configuration parameter "max_wal_size".
2020-12-07 14:35:59 GMT LOG: checkpoints are occurring too frequently (17 seconds apart)
2020-12-07 14:35:59 GMT HINT: Consider increasing the configuration parameter "max_wal_size".
2020-12-07 14:36:09 GMT ERROR: invalid byte sequence for encoding "UTF8": 0x00
2020-12-07 14:36:09 GMT CONTEXT: COPY scalar_calculation, line 3859209
2020-12-07 14:36:09 GMT STATEMENT: COPY scalar_calculation (calculator_id, smiles_id, mean_value, remark) FROM stdin;
2020-12-07 14:36:09 GMT FATAL: invalid frontend message type 49
2020-12-07 14:36:10 GMT LOG: PID 109816 in cancel request did not match any process
I am seeing similar behavior on Windows 10 Pro machines with PG 11.x.
I used pg_dump as suggested above and restored to those machines with psql, with no errors.
I also noticed that the error shifts around with different pg_restore "-j" settings. For instance, without the option (or with "-j 1"), pg_restore always fails on the same table and record. Changing to "-j 4" lets that table apply the record without error, but the failure then occurs on another table.
Setting a particular column of that record to NULL lets the entire restore succeed.
Using pgAdmin 4 to run the restore never produces the error.
Copying the exact command displayed in pgAdmin reproduces the same error:
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 32780; 0 5435293 TABLE DATA REDACTED_TABLE_NAME postgres
pg_restore: [archiver (db)] COPY failed for table "REDACTED_TABLE_NAME": ERROR: extra data after last expected column
CONTEXT: COPY mi_gmrfutil, line 117: "REDACTED PLAIN TEXT \N REDACTED PLAIN TEXT \N \N \N \N \N \N REDACTED PLAIN TEXT \N \N REDACTED PLAIN TEXT \N ..."
pg_restore: FATAL: invalid frontend message type 49
I tried using pg_restore version 14 with the same outcome.
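One way to investigate outside the server is to convert the custom-format archive to plain SQL with pg_restore -f and scan it for NUL bytes, which would explain the invalid byte sequence for encoding "UTF8": 0x00 COPY failure. A minimal sketch; demo.sql below is a fabricated stand-in for the converted archive (with the real backup you would first run pg_restore -f demo.sql myarchive):

```shell
# Fabricate a stand-in for the plain-SQL form of the archive, containing
# one NUL byte inside a COPY data row for illustration:
printf 'COPY t FROM stdin;\nabc\0def\n\\.\n' > demo.sql

# Count NUL bytes: tr -cd '\0' deletes every byte except NUL, and wc -c
# counts what remains. A nonzero count means the dump contains bytes that
# COPY will reject under UTF8.
nuls=$(tr -cd '\0' < demo.sql | wc -c)
echo "NUL bytes found: $nuls"
```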

PostgreSQL failed with vacuum and autovacuum

Postgres v11.9
There are many errors in the Postgres log like this:
2020-09-05 17:35:37 GMT [22464]: #: [6-1] ERROR: uncommitted xmin 636700836 from before xid cutoff 809126794 needs to be frozen
2020-09-05 17:35:37 GMT [22464]: #: [7-1] CONTEXT: automatic vacuum of table "table_nane"
A manual VACUUM fails with this error too.
What can I do to fix this error?
1. Export the database with pg_dump.
2. Create a new database cluster and restore the dump into it.
3. Remove the original database cluster.
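A sketch of those steps as commands; the data directory, port, and user below are placeholders, so adapt them to your installation, and verify the new cluster before removing the old one:

```shell
# 1. Export everything from the damaged cluster while it still starts:
pg_dumpall -U postgres -f full_backup.sql

# 2. Initialize a fresh cluster in a new data directory, start it on a
#    temporary port, and restore the dump into it:
initdb -D /var/lib/postgresql/11/new_cluster
pg_ctl -D /var/lib/postgresql/11/new_cluster -o '-p 5433' -w start
psql -U postgres -p 5433 -d postgres -f full_backup.sql

# 3. After verifying the restored data, stop the old cluster and remove
#    its data directory.
```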

Postgres is not accepting commands and Vacuum failed due to missing chunk number error

Version: 9.4.4
An exception occurs while inserting a record into health_status:
org.postgresql.util.PSQLException: ERROR: database is not accepting commands to avoid wraparound data loss in database "db"
Hint: Stop the postmaster and vacuum that database in single-user mode.
As the hint indicates, I started the server in single-user mode and tried to run a full vacuum, but instead received the error below:
PostgreSQL stand-alone backend 9.4.4
backend> vacuum full;
< 2019-11-06 14:26:25.179 UTC > WARNING: database "db" must be vacuumed within 999999 transactions
< 2019-11-06 14:26:25.179 UTC > HINT: To avoid a database shutdown, execute a database-wide VACUUM in that database.
You might also need to commit or roll back old prepared transactions.
< 2019-11-06 14:26:25.215 UTC > ERROR: missing chunk number 0 for toast value xxxx in pg_toast_1234
< 2019-11-06 14:26:25.215 UTC > STATEMENT: vacuum full;
I tried a plain VACUUM, but that leads to another error indicating missing attributes for relid xxxxx:
backend> vacuum;
< 2019-11-06 14:27:47.556 UTC > ERROR: catalog is missing 3 attribute(s) for relid xxxxx
< 2019-11-06 14:27:47.556 UTC > STATEMENT: vacuum;
I tried VACUUM FREEZE for the entire database, but after some time it hits the catalog error again.
Furthermore, VACUUM FREEZE on a single table works fine, but vacuuming all tables presumably includes the corrupted one and ends with the same error:
backend> vacuum full freeze
< 2019-11-07 08:54:25.958 UTC > WARNING: database "db" must be vacuumed within 999987 transactions
< 2019-11-07 08:54:25.958 UTC > HINT: To avoid a database shutdown, execute a database-wide VACUUM in that database.
You might also need to commit or roll back old prepared transactions.
< 2019-11-07 08:54:26.618 UTC > ERROR: missing chunk number 0 for toast value xxxxx in pg_toast_xxxx
< 2019-11-07 08:54:26.618 UTC > STATEMENT: vacuum full freeze
Is there a way to identify the corrupted table and restore the integrity of the database, so the application can access the rest of it?
P.S. I do not have a backup to restore from, so deleting the corrupted data or somehow fixing it is the only option here.
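Since VACUUM FREEZE on a single table works until the corrupted one is hit, one way to isolate it is to vacuum every table individually and record which ones fail. A hedged sketch, assuming the database "db" still accepts psql connections (if it only starts in single-user mode, the same per-table statements can be replayed by hand there):

```shell
# List every user table and run VACUUM FREEZE on each one separately;
# tables whose TOAST data is corrupt will fail individually, which
# identifies them without aborting the whole database-wide vacuum.
psql -d db -Atc "SELECT quote_ident(schemaname) || '.' || quote_ident(relname)
                 FROM pg_stat_user_tables" |
while IFS= read -r tbl; do
  psql -d db -c "VACUUM FREEZE $tbl" >/dev/null 2>&1 ||
    echo "corrupt candidate: $tbl"
done
```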

PostgreSQL's pg_restore error while restoring

I took a backup of a database on a remote server, running pg_dump from my local machine in directory format. The remote server's Postgres version is 9.3.23 and my local version is 9.6. The restore completes and the data is there, but it throws a few errors.
This is the command I used for dumping:
pg_dump -h 172.16.0.70 -U postgres -d enet -n finance -Fd -j5 -f fin
This is the command I used for restoring:
pg_restore -h 172.16.0.70 -U postgres -d newdb08aug19 -j5 fin
-- Dumped from database version 9.3.23
-- Dumped by pg_dump version 9.6.14
pg_restore: [archiver (db)] Error while INITIALIZING:
pg_restore: [archiver (db)] could not execute query: ERROR: unrecognized configuration parameter "idle_in_transaction_session_timeout"
Command was: SET idle_in_transaction_session_timeout = 0;
pg_restore: [archiver (db)] could not execute query: ERROR: unrecognized configuration parameter "row_security"
Command was: SET row_security = off;
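Those two SET errors come from pg_dump 9.6 emitting configuration parameters that a 9.3 server does not recognize; they are typically harmless and the rest of the restore proceeds. If a clean restore is wanted anyway, one workaround is to go through plain SQL and drop those lines first. A minimal sketch; demo.sql below is a fabricated stand-in for the output of pg_restore -f demo.sql fin against the real directory-format dump:

```shell
# Fabricate a stand-in for the plain-SQL dump: the two offending SET
# lines plus one real statement.
printf 'SET idle_in_transaction_session_timeout = 0;\nSET row_security = off;\nCREATE TABLE t (a integer);\n' > demo.sql

# Strip the parameters that PostgreSQL 9.3 does not recognize, then feed
# the cleaned file to psql (not shown) instead of using pg_restore:
grep -Ev '^SET (idle_in_transaction_session_timeout|row_security) ' demo.sql > demo_clean.sql
cat demo_clean.sql
```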

Postgres pg_dump fails due to a non-English character in the database name

I have created a Postgres database with a non-ASCII name, for example "Química".
When I try to take a backup of that database using the following command, I get this exception:
pg_dump: [archiver (db)] connection to database "Química" failed: FATAL: database "Química" does not exist.
Command used:
--host 127.0.0.1 --port 7555 --username postgres --format=d --verbose --jobs=10 --compress 9 --file "C:\Delete\Archive" --dbname "Química" --schema "Archive_TVT"
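One plausible cause (an assumption, not confirmed by the error shown): on Windows, the console code page can hand pg_dump the name "Química" in a different encoding than the one the server stored it under, so the names no longer match byte-for-byte and the database appears not to exist. The difference is visible in the raw bytes:

```shell
# UTF-8 encodes the accented "í" as two bytes (c3 ad):
printf 'Química' | od -An -tx1

# Windows-1252/Latin-1 encodes it as a single byte (ed), written here
# with an octal escape so the example is encoding-independent:
printf 'Qu\355mica' | od -An -tx1
```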