We have a large PostgreSQL dump with hundreds of tables that I can successfully import with pg_restore. We are developing software that inserts into many of these tables (~100), and before every run we need to return them to their original state (that is, to the content that was in the dump). Restoring the full dump again takes a lot of time, and we just can't wait half an hour before every debugging session. So I need a relatively fast way to revert these tables to the state they are in right after restoring from the dump.
I've tried using pg_restore with the -L switch and selecting these tables, but I get either a duplicate key error when using both --data-only and --clean, or a "cannot drop table X because other objects depend on it" error when using only --clean. Issuing a SET CONSTRAINTS ALL DEFERRED command before pg_restore did not work either. Maybe my entries in the table list are wrong; right now it's
491; 1259 39623998 TABLE public some_table some_user
8021; 0 0 COMMENT public TABLE some_table some_user
8022; 0 0 ACL public some_table some_user
for every table and then
6700; 0 39624062 TABLE DATA public some_table postgres
8419; 0 0 SEQUENCE SET public some_table_pk_id_seq some_user
for every table.
We only insert data and never update existing rows, so deleting all rows above a remembered id and resetting the sequences might work, but I really don't want to write these commands by hand for all hundred tables, and I'm not even sure it would work even with CASCADE to delete other objects that depend on the given rows.
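A hedged sketch of that idea (everything here is an assumption: the watermark table, and that every table has a serial id column and a sequence named like some_table_pk_id_seq as in the list above):

-- Once, right after restoring the dump, record the per-table maximum ids:
CREATE TABLE restore_watermarks AS
    SELECT 'some_table'::text AS tablename, max(id) AS max_id FROM some_table;
-- ... add one row per table ...

-- Before every run, generate the cleanup statements and execute their output:
SELECT format('DELETE FROM %I WHERE id > %s; SELECT setval(%L, %s);',
              tablename, max_id, tablename || '_pk_id_seq', max_id)
FROM restore_watermarks;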
Does anyone have any better idea how to handle this?
So you are looking for something like a snapshot in order to be able to revert quickly to a certain state.
I am not aware of a possibility in PostgreSQL to roll back to a certain timestamp.
While searching for a solution, I've found two ideas here
Use create database with the template option (sketched below)
Virtualize your PostgreSql installation using VMWare or VirtualBox, and use the snapshot feature of the virtual machines.
Again, both ideas are copied from the above source (I searched for "postgresql db snapshots").
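The first idea might look like this (a minimal sketch; the database names are made up, and it requires the CREATEDB privilege and no active connections to either database):

-- One-time: keep a pristine copy of the freshly restored database.
CREATE DATABASE myapp_pristine TEMPLATE myapp;

-- Before every debugging session: throw away the dirty copy and re-clone.
DROP DATABASE myapp;
CREATE DATABASE myapp TEMPLATE myapp_pristine;

The clone is done as a file-level copy, so it is typically much faster than re-running pg_restore.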
You can use PITR (point-in-time recovery) to create a snapshot before loads, and use that snapshot to take you back to any point that you have the WAL for.
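In rough outline (a hedged sketch assuming WAL archiving is already configured; the paths and target time are placeholders, and the settings shown apply to PostgreSQL 12 or later):

# Before the load, take a base backup:
pg_basebackup -D /backups/base -X stream

# To revert: stop the server, replace the data directory with the base
# backup, then set in postgresql.conf:
#   restore_command      = 'cp /backups/wal/%f %p'
#   recovery_target_time = '2024-01-01 12:00:00'
# Create an empty recovery.signal file in the data directory and start
# the server; it replays WAL up to the target time.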
Related
At our company we had a DB crash a few days ago due to hardware issues. We recovered from that, but since then we've been getting the following error every time we try to back up our DB.
pg_dump: ERROR: timestamp out of range
pg_dump: SQL command to dump the contents of table "account_bank_statement_line"
The error is in the "account_bank_statement_line" table, where we have 5 rows in which only the 'create_date' column holds a value, a date in the year 4855(!), while the rest of the columns, even the id (primary key), are null. We can't delete or update those rows using pgAdmin 4 or the PostgreSQL terminal.
We're in a very risky position right now, with no backup of several days of retail sales. Any hints at all would be highly appreciated.
First, if the data are important, hire a specialist.
Second, run your pg_dump with the option --exclude-table=account_bank_statement_line so that you at least have a backup of the rest of your database.
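For example (the database name is a placeholder):

pg_dump --exclude-table=account_bank_statement_line mydb > partial_backup.sql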
The next thing you should do is to stop the database and take a cold backup of all the files. That way you have something to go back to if you mess up.
The key to proceeding is to find the ctids (physical addresses) of the problematic rows. You can then use those to delete the rows.
You can approach that by running queries like
SELECT create_date FROM account_bank_statement_line
WHERE ctid < '(42,0)';
and try to find the ctids where you get an error. Once you have found a row where the following falls over:
SELECT * FROM account_bank_statement_line
WHERE ctid = '(42,14)';
you can delete the row by its ctid.
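For example, if the probe above falls over at '(42,14)':

DELETE FROM account_bank_statement_line WHERE ctid = '(42,14)';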
Once you are done, take a pg_dumpall of the database cluster, create a new one and restore the dump. It is dangerous to continue working with a cluster that has experienced corruption, because corruption can remain unseen and spread.
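In outline (paths are placeholders):

pg_dumpall > cluster_dump.sql            # dump the whole old cluster
initdb -D /var/lib/postgresql/newdata    # create a brand-new cluster
pg_ctl -D /var/lib/postgresql/newdata start
psql -f cluster_dump.sql postgres        # restore everything into it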
I know what we did might not be the most technically advanced, but it solved our issue. We consulted a few experts and what we did was:
migrated all the data to a new table (account_bank_statement_line2), this transferred all the rows that had valid data.
Then we dropped the original "account_bank_statement_line" table and
renamed the new table to "account_bank_statement_line".
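In SQL, those steps might have looked roughly like this (a hedged sketch: the date cutoff is an assumption based on the year-4855 values described above, and CREATE TABLE ... AS does not copy indexes, constraints, or defaults, so those would need recreating):

-- Copy only the readable, sane rows into a new table:
CREATE TABLE account_bank_statement_line2 AS
    SELECT *
    FROM account_bank_statement_line
    WHERE create_date < '3000-01-01' OR create_date IS NULL;

-- Swap the tables:
DROP TABLE account_bank_statement_line;
ALTER TABLE account_bank_statement_line2 RENAME TO account_bank_statement_line;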
Then the db backup ran smoothly like always.
Hope this helps anyone who's in deep trouble like us. Cheers!
I am new to PostgreSQL. I have a database with an employee table (id, name, address, phonenumber, salary). I would like to back up an employee's details whenever any of phonenumber, address, or salary is changed.
Is there any way of doing this with pg_dump, or should I settle for the trigger method that writes the original tuples to another table, say Backup, whenever changes are made?
Please, could someone explain in detail how to get started with this using pg_dump?
pg_dump scripts out the current state of the database. That's all it does, with some fine-tuning to let you get at individual tables, schemas, etc. It does not watch for changes, it does not work at the row level (barring some zany row-level security setup), and it is not an audit log.
What you're describing -- backing up individual rows when they're modified -- is an audit log, so pg_dump is the wrong tool for the job. An update trigger which inserts the original row into an audit table is the canonical way to accomplish this, so you're on the right track there. If you need to generate scripts of the audit table, that's where pg_dump comes in.
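A minimal sketch of such a trigger, using the employee table from the question (the audit table name and the trigger details are assumptions; EXECUTE FUNCTION needs PostgreSQL 11+, older versions use EXECUTE PROCEDURE):

-- Audit table: same columns as employee, plus a change timestamp.
CREATE TABLE employee_backup (LIKE employee);
ALTER TABLE employee_backup ADD COLUMN changed_at timestamptz DEFAULT now();

CREATE OR REPLACE FUNCTION employee_audit() RETURNS trigger AS $$
BEGIN
    -- Archive the original row only when a watched column actually changes.
    IF NEW.phonenumber IS DISTINCT FROM OLD.phonenumber
       OR NEW.address IS DISTINCT FROM OLD.address
       OR NEW.salary IS DISTINCT FROM OLD.salary THEN
        INSERT INTO employee_backup (id, name, address, phonenumber, salary)
        VALUES (OLD.id, OLD.name, OLD.address, OLD.phonenumber, OLD.salary);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER employee_audit_trg
    BEFORE UPDATE ON employee
    FOR EACH ROW EXECUTE FUNCTION employee_audit();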
I have a Postgres database and I am trying to back up a table with:
pg_dump --data-only --table=<table> <db> > dump.sql
Days later I am trying to overwrite it (basically I want to erase all the data and load the data from my dump) with:
psql -d <db> -c --table=<table> < dump.sql
But it doesn't overwrite; it appends to the existing data without deleting anything.
Any advice would be awesome, thanks!
You have basically two options, depending on your data and fkey constraints.
If there are no fkeys to the table, then the best thing to do is to truncate the table before loading it. Note that truncate behaves a little oddly in transactions, so the best thing to do is (in a transaction block):
Lock the table
Truncate
Load
This will avoid other transactions seeing an empty table.
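In psql that could look like this (table and file names are placeholders):

BEGIN;
LOCK TABLE mytable IN ACCESS EXCLUSIVE MODE;  -- keep others out until commit
TRUNCATE mytable;
\i dump.sql                                   -- replay the data-only dump
COMMIT;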
If you have fkeys, then you may want to load into a temporary table and then do an upsert. In this case you may still want to lock the table to avoid a race condition, if it is possible that other transactions may want to write to it (again in a transaction block):
Load data into a temporary table
Lock the destination table (optional, see above)
Use a writable CTE to "upsert" into the table.
Use a separate DELETE statement to remove rows that are no longer in the incoming data.
Stage 3 is a little tricky. You might need to ask a separate question about it, but basically you will have two stages (write this in consultation with the docs; see the sketch after this list):
Update existing records
Insert non-existing records
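A hedged sketch of stage 3 with a writable CTE (the table and column names are invented, and it assumes the destination table is locked as described above, since this pattern is racy otherwise):

WITH updated AS (
    UPDATE destination d
    SET col1 = s.col1,
        col2 = s.col2
    FROM staging s
    WHERE d.id = s.id
    RETURNING d.id
)
INSERT INTO destination (id, col1, col2)
SELECT s.id, s.col1, s.col2
FROM staging s
WHERE s.id NOT IN (SELECT id FROM updated);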
Hope this helps.
I am trying to
create a snapshot of a PostgreSQL database (using pg_dump),
do some random tests, and
restore to the exact same state as the snapshot, and do some other random tests.
These can happen over many/different days. Also, I am in a multi-user environment where I am not the DB admin; in particular, I cannot create a new DB.
However, when I restore db using
gunzip -c dump_file.gz | psql my_db
changes in step 2 above remain.
For example, if I make a copy of a table:
create table foo1 as (select * from foo);
and then restore, the copied table foo1 remains there.
Could someone explain how I can restore to the exact same state, as if step 2 never happened?
-- Update --
Following the comments of @a_horse_with_no_name, I tried to use
DROP OWNED BY my_db_user
to drop all my objects before the restore, but I got an error associated with an extension that I cannot control, and my tables remained intact.
ERROR: cannot drop sequence bg_gid_seq because extension postgis_tiger_geocoder requires it
HINT: You can drop extension postgis_tiger_geocoder instead.
Any suggestions?
You have to remove everything that's there by dropping and recreating the database or something like that. pg_dump basically just makes an SQL script that, when applied, will ensure all the tables, stored procs, etc. exist and have their data. It doesn't remove anything.
You can use PostgreSQL schemas: keep your objects in a dedicated schema and drop and recreate that schema before each restore.
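For example (a hedged sketch; "sandbox" is a made-up schema name, and this only works if your objects and the dump's target schema are that same schema):

DROP SCHEMA sandbox CASCADE;   -- removes your tables, views, sequences, ...
CREATE SCHEMA sandbox;
-- then re-run the restore:
--   gunzip -c dump_file.gz | psql my_db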
I have a problem encountered lately in our Postgres database: when I query select * from myTable, it fails with 'could not open relation with OID 892600370', and the front-end application can't run properly anymore. Based on my research, I determined the column that has an error, but I want to locate exactly which relation the OID refers to so that I can fix it. Please help.
Thank you in advance.
You've got a corrupted database. Might be a bug, but more likely bad hardware. If you have a recent backup, just use that. I'm guessing you don't though.
Make sure you locate any backups of either the database or its file tree and keep them safe.
Stop the PostgreSQL server and take a file backup of the entire database tree (base, global, pg_xlog - everything at that level). It is now safe to start fiddling...
Now, start the database server again and dump tables one at a time. If a table won't dump, try dropping any indexes and foreign-key constraints and give it another go.
For a table that won't dump, it might be just certain rows. Drop any indexes and dump a range of rows using COPY ... SELECT. That should let you narrow down any corrupted rows and get the rest.
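For example, using psql's client-side \copy to salvage the rows before a suspect region (the table name, output file, and page number are placeholders):

\copy (SELECT * FROM mytable WHERE ctid < '(4200,0)') TO 'mytable_part1.csv' CSV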
Now you have a mostly-recovered database, restore it on another machine and take whatever steps are needed to establish what is damaged/lost and what needs to be done.
Run a full set of tests on the old machine and see if anything needs replacement. Consider whether your monitoring needs improvement.
Then - make sure you keep proper backups next time, that way you won't have to do all this, you'll just use them instead.
could not open relation with OID 892600370
A relation is a table or index. A relation's OID is the OID of the row in pg_class where this relation is defined.
Try select relname from pg_class where oid=892600370;
Often it's immediately obvious from relname what this relation is; otherwise, look at the other fields in pg_class: relnamespace, relkind, ...
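For example (the ::regnamespace cast needs PostgreSQL 9.5+):

SELECT relname,
       relnamespace::regnamespace AS schema,
       relkind   -- 'r' = table, 'i' = index, 'S' = sequence, ...
FROM pg_class
WHERE oid = 892600370;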