Odoo 10 is not backing up DB in PostgreSQL 9.5. Shows "SQL state: 22008. Timestamp out of range on account_bank_statement_line."

At our company we had a DB crash a few days ago due to a hardware failure. We recovered from it, but since then we get the following error every time we try to back up our DB:
pg_dump: ERROR: timestamp out of range
pg_dump: SQL command to dump the contents of table "account_bank_statement_line"
The error is in the "account_bank_statement_line" table, where we have 5 rows in which only the 'create_date' column is populated, with a date in the year 4855(!), while all the other columns, even the id (primary key), are NULL. We can't delete or update those rows using pgAdmin 4 or the PostgreSQL terminal.
We're in a very risky position right now, with no backup of several days of retail sales. Any hints at all would be highly appreciated.

First, if the data are important, hire a specialist.
Second, run your pg_dump with the option --exclude-table=account_bank_statement_line so that you at least have a backup of the rest of your database.
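For example, a minimal sketch (the database name odoo is a placeholder):
pg_dump --exclude-table=account_bank_statement_line -f rest_of_db.sql odoo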
The next thing you should do is to stop the database and take a cold backup of all the files. That way you have something to go back to if you mess up.
The key to making progress is to find the ctids (physical addresses) of the problematic rows. You can then use those to delete the rows.
You can approach that by running queries like
SELECT create_date FROM account_bank_statement_line
WHERE ctid < '(42,0)';
and trying to find the ctids at which you get an error. Once you have found a row for which the following query fails:
SELECT * FROM account_bank_statement_line
WHERE ctid = '(42,14)';
you can delete the row by its ctid.
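For example, assuming '(42,14)' turned out to be one of the bad rows:
DELETE FROM account_bank_statement_line
WHERE ctid = '(42,14)';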
Once you are done, take a pg_dumpall of the database cluster, create a new one and restore the dump. It is dangerous to continue working with a cluster that has experienced corruption, because corruption can remain unseen and spread.
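A sketch of that last step, with the dump path and the new cluster's port as placeholders:
pg_dumpall -f /backup/cluster.sql
psql -p 5433 -f /backup/cluster.sql postgres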

I know what we did might not be the most technically advanced, but it solved our issue. We consulted a few experts and what we did was:
migrated all the data to a new table (account_bank_statement_line2), which transferred all the rows that had valid data,
then dropped the original "account_bank_statement_line" table (dropping the whole table worked even though we couldn't delete its individual corrupt rows),
and renamed the new table to "account_bank_statement_line".
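A minimal sketch of that swap in SQL, assuming the bad rows are exactly the ones with a NULL id (note that CREATE TABLE ... AS copies neither indexes nor constraints nor defaults, so those have to be recreated afterwards):
-- Copy only the rows with valid data into a fresh table.
CREATE TABLE account_bank_statement_line2 AS
SELECT * FROM account_bank_statement_line
WHERE id IS NOT NULL;
-- Dropping the whole table works even when single rows cannot be deleted.
DROP TABLE account_bank_statement_line;
ALTER TABLE account_bank_statement_line2
RENAME TO account_bank_statement_line;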
Then the db backup ran smoothly like always.
Hope this helps anyone who's in deep trouble like us. Cheers!

Related

Select query doesn't work with specific column condition in postgresql

Background:
The PostgreSQL service suffered some corruption after a server power outage, and I used the pg_resetwal command to get it running again, as suggested here.
After the service started successfully, I'm facing this weird issue.
When I query by the id column, nothing is returned, even though the row is there and the column type matches:
# SELECT id, email FROM users WHERE id=1;
id | email
----+-------
(0 rows)
But if I query by another column (in this example, the email column), I get a result:
# SELECT id, email FROM users WHERE email='john@gmail.com';
id | email
----+--------------------------
1 | john@gmail.com
(1 row)
Any suggestions?
PostgreSQL version: 12.7
OK, so after you managed to get your PostgreSQL instance running, what you should have done is:
Take an immediate backup of all your databases
Audit them + check for any damage
Drop the existing dbs and restore from the audited backups
Identify the mis-configuration in your server that resulted in the data corruption.
I assume you haven't done these things and have a corrupted index.
Your hardware is lying to PostgreSQL about persisting data to disk. It isn't safe to trust the existing data: anything that was being updated (not just rows directly updated by you, but by vacuum processes too) is suspect.
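If it really is a corrupted index, rebuilding the indexes on that table may make the query by id return rows again; a minimal sketch for the users table from the question (triage only, not a substitute for the steps above):
-- Rebuild all indexes on the table, including the primary key index.
REINDEX TABLE users;
-- Check whether the row is visible through the index again.
SELECT id, email FROM users WHERE id = 1;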

PostgreSQL select query with where clause 'COLUMN_1 is null' hangs

Recently we restored a PostgreSQL database from a backup that was created without stopping the database (I know this was very wrong, and now we are paying the price). The backup was a simple copy of the database directory.
Now we noticed that when we execute
select *
from table
where COLUMN_1 is null
on one of our tables, the query just hangs (freezes) and never finishes. Other queries on the same table run fine, and distinct(COLUMN_1) returns all the values. The same query on the other column (COLUMN_2 is null) runs correctly. It seems there is something wrong with that one column.
How can I repair such possibly damaged table?
Dump the whole database with pg_dump and restore it to a new cluster. If that works, it will get rid of all data corruption.
If that fails, you should hire a specialist.
If you attach to the hanging backend with a debugger, you can investigate what it is doing (if you are familiar with the source).
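To find the process ID of the hanging backend, the standard pg_stat_activity view helps; a sketch (the filter is an assumption about how the stuck session presents itself):
-- List active sessions with their backend process IDs.
SELECT pid, state, query
FROM pg_stat_activity
WHERE state = 'active';
You can then attach to that pid with the debugger, e.g. gdb -p <pid>.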
I ran VACUUM FULL and it solved my problem. The query with the IS NULL clause on that table now executes.

Postgresql Corrupted pg_catalog table

I've been running a Postgres database on an external hard drive, and it appears it got corrupted after reconnecting it to a sleeping laptop that THOUGHT the server was still running. After running a bunch of REINDEX commands to fix some other errors, I'm now getting the error below.
ERROR: missing chunk number 0 for toast value 12942 in pg_toast_2618
An example of a command that returns this error is:
select table_name, view_definition from INFORMATION_SCHEMA.views;
I've run the command "select 2618::regclass;", which gives you the problem table. However, reindexing it doesn't seem to solve the problem. I see a lot of suggestions out there about finding the corrupted row and deleting it. However, the table that appears to have corruption in my instance is pg_rewrite, and it appears NOT to be a corrupted row but a corrupted COLUMN.
I've run the following commands, but they aren't fixing the problem.
REINDEX table pg_toast.pg_toast_$$$$;
REINDEX table pg_catalog.pg_rewrite;
VACUUM ANALYZE pg_rewrite; -- just returns succeeded.
I can run the following SQL statement and it will return data.
SELECT oid, rulename, ev_class, ev_type, ev_enabled, is_instead, ev_qual FROM pg_rewrite;
However, if I add the ev_action column to the above query it throws a similar error of:
ERROR: missing chunk number 0 for toast value 11598 in pg_toast_2618
This error appears to affect all schema related queries to things like INFORMATION_SCHEMA tables. Luckily it seems as though all of my tables and data in my tables are fine but I cannot query the sql that generates those tables and any views I have created seem inaccessible (although I've noticed I can create new views).
I'm not familiar enough with Postgresql to know exactly what pg_rewrite is, but I'm guessing I can't just truncate the data in the table or set ev_action = null.
I'm not sure what to do next with the information I've gathered so far.
(At least) your pg_rewrite catalog has data corruption. This table contains the definition of all views, including system views that are necessary for the system to work.
The best thing to do is to restore a backup.
You won't be able to get the database back to work, the best you can do is to salvage as much of the data as you can.
Try a pg_dump. I don't know off-hand if that needs any views, but if it works, that is good. You will have to explicitly exclude all views from the dump, else it will most likely fail.
If that doesn't work, try to use COPY for each table to get at least the data out. The metadata will be more difficult.
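A sketch of the COPY route, with my_table and the output path as placeholders (server-side COPY TO a file requires superuser rights; psql's \copy is the client-side alternative):
COPY my_table TO '/tmp/my_table.csv' WITH (FORMAT csv, HEADER);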
If this is an important database, hire an expert.

How to find the OID of the rows in a table in postgres?

I have a problem encountered lately in our Postgres database: when I query select * from myTable, it fails with 'could not open relation with OID 892600370', and its front-end application can't run properly anymore. Based on my research, I determined the column that has the error, but I want to locate the exact OIDs of the affected rows so that I can modify them. Please help.
Thank you in advance.
You've got a corrupted database. Might be a bug, but more likely bad hardware. If you have a recent backup, just use that. I'm guessing you don't though.
Make sure you locate any backups of either the database or its file tree and keep them safe.
Stop the PostgreSQL server and take a file backup of the entire database tree (base, global, pg_xlog - everything at that level). It is now safe to start fiddling...
Now, start the database server again and dump tables one at a time. If a table won't dump, try dropping any indexes and foreign-key constraints and give it another go.
For a table that won't dump, it might be just certain rows. Drop any indexes and dump a range of rows using COPY ... SELECT. That should let you narrow down any corrupted rows and get the rest.
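A sketch of such a ranged dump, assuming a table named my_table; a ctid literal '(P,R)' addresses row R on page P, so this copies roughly the first 1000 pages and can be repeated with shifted bounds to isolate the damaged ones:
COPY (SELECT * FROM my_table
      WHERE ctid >= '(0,1)' AND ctid < '(1000,1)')
TO '/tmp/my_table_part1.csv' WITH (FORMAT csv);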
Now you have a mostly-recovered database, restore it on another machine and take whatever steps are needed to establish what is damaged/lost and what needs to be done.
Run a full set of tests on the old machine and see if anything needs replacement. Consider whether your monitoring needs improvement.
Then - make sure you keep proper backups next time, that way you won't have to do all this, you'll just use them instead.
could not open relation with OID 892600370
A relation is a table or index. A relation's OID is the OID of the row in pg_class where this relation is defined.
Try select relname from pg_class where oid=892600370;
Often it's immediately obvious from relname what this relation is, otherwise you want to look at the other fields in pg_class: relnamespace, relkind,...

PostgreSQL reset tables to original state

We have a large PostgreSQL dump with hundreds of tables that I can successfully import with pg_restore. We are developing software that inserts into a lot of these tables (~100), and for every run we need to return these tables to their original state (that is, the content that was in the dump). Restoring the original dump again takes a lot of time, and we just can't wait half an hour before every debugging session. So I need a relatively fast way to revert these tables to the state they are in right after restoring from the dump.
I've tried using pg_restore with the -L switch and selecting these tables, but I get either a duplicate key error when using both --data-only and --clean, or a "cannot drop table X because other objects depend on it" error when using only --clean. Issuing a SET CONSTRAINTS ALL DEFERRED command before pg_restore did not work either. Maybe I have the rows in the table list all wrong; right now it's
491; 1259 39623998 TABLE public some_table some_user
8021; 0 0 COMMENT public TABLE some_table some_user
8022; 0 0 ACL public some_table some_user
for every table and then
6700; 0 39624062 TABLE DATA public some_table postgres
8419; 0 0 SEQUENCE SET public some_table_pk_id_seq some_user
for every table.
We only insert data and don't update existing rows, so deleting all rows above a remembered maximum id and resetting the sequences might work, but I really don't want to write these commands manually for all hundred tables, and I'm not even sure it would work even if I made the deletes cascade to other objects that depend on the given rows.
Does anyone have any better idea how to handle this?
So you are looking for something like a snapshot, in order to be able to revert quickly to a certain state.
I am not aware of a possibility in PostgreSQL to roll back to a certain timestamp.
While searching for a solution, I found two ideas here:
Use CREATE DATABASE with the TEMPLATE option (a sketch follows below).
Virtualize your PostgreSQL installation using VMware or VirtualBox, and use the snapshot feature of the virtual machines.
Again, both ideas are copied from the above source (I searched for "postgresql db snapshots").
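A minimal sketch of the template idea, assuming the pristine dump was restored once into a database named mydb_pristine (a placeholder name): recreate the working database from it before each run, which is a fast file-level copy:
-- Requires that nobody is connected to either database.
DROP DATABASE IF EXISTS mydb_work;
CREATE DATABASE mydb_work TEMPLATE mydb_pristine;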
You can use PITR to create a snapshot before loads and use the PITR snapshot to take you back to any point that you have the logs for.
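A sketch of marking the pristine state for PITR, assuming WAL archiving is already configured (the label is arbitrary; a later recovery would target it via recovery_target_name):
-- Record a named restore point at the current WAL position.
SELECT pg_create_restore_point('tables_pristine');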