How to backup/recover database after a "missing chunk number 0" message? - postgresql

PostgreSQL. When I try to back up the database via pgAdmin, I get this error:
pg_dump: error: Dumping the contents of table "filearchives" failed: PQgetResult() failed.
pg_dump: error: Error message from server: ERROR: missing chunk number 0 for toast value 1508672 in pg_toast_1245716
pg_dump: error: The command was: COPY public.filearchives (id, usedtime, docid, options, filesize, filename, stream) TO stdout;
I followed these instructions: https://gist.github.com/supix/80f9a6111dc954cf38ee99b9dedf187a
REINDEX and VACUUM produced similar errors on the main table (filearchives), but nothing for pg_toast, and I was able to select all rows in filearchives, even though a "Corrupted chunk read!" was supposed to appear.
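The gist's approach is essentially a row-by-row scan that forces detoasting of the large column and logs the rows that fail; a minimal sketch, assuming id is the key column and stream holds the toasted data (both names taken from the COPY column list above):
DO $$
DECLARE
    bad_id bigint;
BEGIN
    FOR bad_id IN SELECT id FROM filearchives LOOP
        BEGIN
            -- force detoasting of the large column for this one row
            PERFORM length(stream) FROM filearchives WHERE id = bad_id;
        EXCEPTION WHEN OTHERS THEN
            RAISE NOTICE 'corrupt row: id = %', bad_id;
        END;
    END LOOP;
END $$;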
When I do it via psql, there is no error message, but the backup is incomplete.
The database itself is working fine at the moment.
How can I recover the database without removing records? Is there a way to create a backup?
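Until the damaged rows are dealt with, a partial backup can still be taken by skipping the broken table's data; pg_dump's --exclude-table-data option does that (the database name below is a placeholder):
pg_dump --exclude-table-data=public.filearchives -Fc -f partial_backup.dump mydatabase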

Related

ERROR: attempted to read an unexpected stripe while reading columnar table daily_p2022_10_11, stripe with id=281134 is not flushed

I am getting this error in postgres alert log:
ERROR: attempted to read an unexpected stripe while reading columnar table daily_p2022_10_11, stripe with id=281134 is not flushed.
When I try VACUUM, I get the following:
ccap_proddb=> vacuum verbose daily_p2022_10_11;
ERROR: invalid columnar chunk entry
DETAIL: Chunk number out of range: 0
I am also not able to execute: SELECT count(*) FROM daily_p2022_10_11;
We are on PostgreSQL 14.2 and Citus 11.0-1.

data corrupted in postgres - right sibling's left-link doesn't match: block 9550 links to 12028 instead of expected 12027 in index "log_attach_id_idx"

I am new to Postgres and we are using it for test reports. We had an issue with our environment that inserted duplicate keys into one of the tables, and since then we get this message when trying to run migration scripts:
error: migration failed: right sibling's left-link doesn't match: block 9550 links to 12028 instead of expected 12027 in index "log_attach_id_idx" in line 0: UPDATE log SET project_id = (SELECT project_id FROM item_project WHERE item_project.item_id=log.item_id LIMIT 1); (details: pq: right sibling's left-link doesn't match: block 9550 links to 12028 instead of expected 12027 in index "log_attach_id_idx")
I tried to run pg_dump and got this error:
pg_dump: error: query was: SELECT pg_catalog.pg_get_viewdef('457544'::pg_catalog.oid) AS viewdef
pg_dumpall: error: pg_dump failed on database "reportportal", exiting
Can anyone help here?
Restore your backup, and research what parameters you changed and what you did to end up with data corruption in the first place.
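If no usable backup exists, rebuilding the damaged index is sometimes attempted (at your own risk, and only after the cause of the corruption is understood); a minimal sketch using the index name from the error message:
REINDEX INDEX log_attach_id_idx;
Note that if this is a unique index and duplicate keys really were inserted, the REINDEX may fail until the duplicates are removed.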

Postgres, I am getting ERROR: unexpected chunk number 0 (expected 1) for toast value 12599063 in pg_toast_16687

I am new to Postgres, and one of my reports that uses SELECT to extract JSON returns the following error.
ERROR: unexpected chunk number 0 (expected 1) for toast value 12599063 in pg_toast_16687
SQL state: XX000
I do not know how to proceed in fixing my query. Any idea?
Run this command:
SELECT reltoastrelid::regclass FROM pg_class WHERE relname = 'table_name';
where table_name is the table the error occurs on. Then check whether the result is the same toast relation as in the error, i.e. pg_toast.pg_toast_XXXXX. Mine happened to be 16687.
Then run these commands to reindex:
REINDEX table table_name;
REINDEX table pg_toast.pg_toast_16687;
VACUUM analyze table_name;
That is data corruption:
restore from backup
upgrade to the latest PostgreSQL minor release
check the hardware
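Before choosing between restoring and reindexing, the amcheck contrib extension that ships with recent PostgreSQL releases can help confirm whether an index's btree structure is actually damaged; a hedged sketch, assuming the toast index follows the usual pg_toast_<oid>_index naming:
-- requires superuser or suitable privileges
CREATE EXTENSION IF NOT EXISTS amcheck;
SELECT bt_index_check('pg_toast.pg_toast_16687_index'::regclass);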

Error while taking backup in Postgresql (Could not read Block X of relation base/Y/Z)

While taking a backup of my PostgreSQL database, it shows:
pg_dump: Dumping the contents of table "gtab17" failed: PQgetResult() failed.
pg_dump: Error message from server: ERROR: invalid page header in block 9576 of relation base/17779/758869
pg_dump: The command was: COPY public.gtab17 (jrdetid, jrmid, acid, dr, cr, narr, ageamt) TO stdout;
I think my table gtab17 is corrupt.
I tried:
VACUUM FULL, which errors on this table:
INFO: vacuuming "public.gtab17" ; ERROR: row is too big: size 3256104, maximum size 8160
ANALYZE, which also errors:
INFO: analyzing "public.gtab17" ;
ERROR: invalid page header in block 9576 of relation base/17779/758869
Database : PostgreSQL 9.2
OS : Windows XP SP3 ; FILESYSTEM : NTFS
I have Googled but did not find any solution for this.
It means your data file is corrupted. A fix is relatively difficult; the best path is recovery from an older backup. You can try to repair it by replacing the broken data page with zeroes, but you will lose some data, and without deeper knowledge you can destroy more than is currently broken.
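Replacing the broken page with zeroes is what the zero_damaged_pages setting does; a rough sketch (destructive, so only attempt it on a copy of the data directory or after a file-level backup, and it needs superuser rights):
-- WARNING: this silently discards the contents of damaged pages
SET zero_damaged_pages = on;
VACUUM FULL gtab17;
SET zero_damaged_pages = off;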

Postgres WARNING: errors ignored on restore: 59

I'm using the pg:transfer utility recommended by Heroku to push and pull databases. For example:
heroku pg:transfer -f postgres://username:password@localhost/database-name -t postgres://user-name:password@host-name/database-name --confirm app-name
I have been able to do it successfully, but each time it states that errors were ignored at the end of the transfer:
WARNING: errors ignored on restore: 59
Do I need to worry about this?
EDIT:
I went through my output and it seems to error on each table: it drops the sequence, then throws an error saying it does not exist.
pg_restore: dropping SEQUENCE OWNED BY roles_id_seq
pg_restore: dropping SEQUENCE roles_id_seq
pg_restore: [archiver (db)] Error from TOC entry 170; 1259 35485 SEQUENCE roles_id_seq postgres
pg_restore: [archiver (db)] could not execute query: ERROR: sequence "roles_id_seq" does not exist Command was: DROP SEQUENCE public.roles_id_seq;
My guess is that it is running a "clean" restore, which drops the previous objects just to be sure and then recreates them.
If these are your only errors they are entirely safe to ignore. Too bad the toolchain is not smart enough to add an IF EXISTS to the drop commands.
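If you end up running the restore yourself instead of through pg:transfer, pg_restore does have such an option: passing --if-exists together with --clean turns the drops into DROP ... IF EXISTS. A hedged example, with the dump file and database name as placeholders:
pg_restore --clean --if-exists --no-owner -d database-name dump-file.dump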