PostgreSQL: how to back up and overwrite specific tables

I need to get a set of tables from my dev db into my production db. Until now I've been creating a dump file from the dev db and using pg_restore on the production db. The problem is that I now need to preserve one table (called users) on the production db while replacing the others.
I think I have the dump working properly with this command:
pg_dump -Fc --no-acl --no-owner -h localhost -U <USER> --exclude-table=users* --data-only <DB NAME> > test.dump
But I can't get the restore to work. I tried the following command:
pg_restore -Fc --no-acl --no-owner -h <PROD HOST> -U <USER> -d <DB NAME> -p <PORT> <FILE LOCATION>
But I get the following errors:
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 2009; 0 121384 TABLE DATA idx_descs Jason
pg_restore: [archiver (db)] COPY failed for table "idx_descs": ERROR: duplicate key value violates unique constraint "idx_descs_pkey"
DETAIL: Key (id)=(6) already exists.
CONTEXT: COPY idx_descs, line 1
It seems that for the tables I'm trying to overwrite, it is just trying to append the data and running into trouble because there are now duplicate primary keys. Any ideas how to do this? Thanks.

So you need to reassign primary keys?
You could try restoring to a temporary table (say, for the failing case, idx_descs_temp), then doing something like:
with t as ( select * from idx_descs_temp )
insert into idx_descs
select id + 100000 [or whatever], [other fields] from t;
Afterwards you need to reset the sequences (if applicable; fill in the sequence name):
select setval( 'idx_descs_id_seq'::regclass, 100000 + [suitable increment]);
If you have a large number of tables you could try to automate this using the system catalog (a sketch follows below).
Note though that you also have to renumber foreign key references. It might be less painful to move the data in the production db first. If you are using an ORM, you could also automate this via application APIs.
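For what it's worth, here is a hedged sketch of that catalog-driven automation: a query that emits one INSERT ... SELECT per staging table. It assumes the staging tables are ordinary tables in public named <target>_temp with an integer id column, and it reuses the 100000 offset from above; none of that is guaranteed to match your schema.
-- Hypothetical sketch: generate one remap statement per *_temp staging table.
-- The naming convention, the "id" column, and the offset are all assumptions.
select format('insert into %I (%s) select %s from %I;',
              replace(c.relname, '_temp', ''),
              cols.collist,
              cols.sellist,
              c.relname)
from pg_class c
join pg_namespace n on n.oid = c.relnamespace
cross join lateral (
    select string_agg(quote_ident(attname), ', ' order by attnum) as collist,
           string_agg(case when attname = 'id' then 'id + 100000'
                           else quote_ident(attname) end,
                      ', ' order by attnum) as sellist
    from pg_attribute
    where attrelid = c.oid and attnum > 0 and not attisdropped
) as cols
where n.nspname = 'public'
  and c.relkind = 'r'
  and c.relname like '%\_temp';
Run the generated statements, then reset the sequences as above.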

Related

pg_restore throws error when trying to restore a table with constraints

I'm trying to dump tables from a production environment and restore them in a dev one. However, when dumping and restoring this table with the following command:
pg_restore --no-owner --no-acl --clean --if-exists -d database dump_file.dump
I get an error stating that I can't drop that table unless I use something like CASCADE (i.e. dropping all other tables that depend on it). Is there a way to determine which tables would be dropped? Is there maybe a way to tell pg_dump to dump the table I'm looking at together with all related tables?
Here's the error raised:
pg_restore: while PROCESSING TOC:
pg_restore: from TOC entry 4066; 2606 30526 CONSTRAINT table1 pkey user
pg_restore: error: could not execute query: ERROR: cannot drop constraint pkey on table public.table1 because other objects depend on it
DETAIL: constraint id_fkey on table public.dag depends on index public.pkey
HINT: Use DROP ... CASCADE to drop the dependent objects too...
You have a table on the dev database whose pkey other objects depend on, and which therefore cannot be dropped before the restore. This is proper behavior.
I don't see you dumping/restoring a particular table; you are dumping/restoring the entire database.
If you want to recreate the production database as a dev one, then do:
pg_restore -C --no-owner --no-acl --clean --if-exists -d postgres dump_file.dump
The -C together with --clean will DROP DATABASE db_name and then rebuild it from scratch: pg_restore connects to the database postgres to run the DROP/CREATE of db_name, and then connects to db_name to load the rest of the objects.
This is the best way to clean out cruft and start at a consistent state.
UPDATE
Update your question with the pg_dump command so it is evident what you are doing.
If you want to see whether a particular table has dependencies, use psql on the original database and do \d the_table to see the dependencies to and from the table. If you tell pg_dump to dump a single table, it will dump just that table; it will not follow dependencies and dump those as well. That is up to you to do.
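If the \d output gets unwieldy, a hedged alternative is to query the catalog directly. This sketch lists the foreign-key constraints on other tables that reference a given table (public.table1 here, taken from the error above); these are the dependents that --clean would need CASCADE for:
-- Foreign keys elsewhere that point at public.table1 (name from the error above).
select conname            as constraint_name,
       conrelid::regclass as referencing_table
from pg_constraint
where contype = 'f'
  and confrelid = 'public.table1'::regclass;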
Look into using a schema management tool to do your changes/migrations. I use Sqitch for this.

pg_restore for 9.4 fails with error 'could not execute query'

The PostgreSQL version I am trying to restore is 9.4.10. I backed up a database from the same version. The command I am using for restore is:
/opt/PostgreSQL/9.4/bin/pg_restore -U postgres --port=5432 -v --dbname=mydb < /backups/309646/WAL/pipe_309646
The error I get is:
pg_restore: executing BLOB 71197
pg_restore: [archiver (db)] Error from TOC entry 3822; 2613 71197 BLOB 71197 user
pg_restore: [archiver (db)] could not execute query: ERROR: duplicate key value violates unique constraint "pg_largeobject_metadata_oid_index"
DETAIL: Key (oid)=(71197) already exists.
Command was: SELECT pg_catalog.lo_create('71197');
This is repeated in the pg errors some 112 times. I need to restore to the same database. Here is the command I used for dumping:
/opt/PostgreSQL/9.4/bin/pg_dump -U postgres -Fc -b --port=5432 'mydb' -t public.users > /backups/309646/WAL/pipe_309646
Any indication as to why this is happening? How can I mitigate this error?
You are trying to restore a large object into a database that already contains a large object with the same oid.
Use a new database that does not contain any large objects yet as target for the restore.
Alternatively, drop the large objects first with
SELECT lo_unlink(oid) FROM pg_largeobject_metadata;
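For example, a minimal sequence assuming the database name and paths from the question. Note that this removes every large object in mydb, so only do it if nothing you want to keep references them:
psql -U postgres -d mydb -c "SELECT lo_unlink(oid) FROM pg_largeobject_metadata;"
/opt/PostgreSQL/9.4/bin/pg_restore -U postgres --port=5432 -v --dbname=mydb /backups/309646/WAL/pipe_309646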

How to migrate a PostgreSQL database from a virtual machine to a production server?

I tried to migrate a database from a virtual machine to a production server (both running Ubuntu), but I faced a lot of problems. First, when I tried to create a backup file with this command:
pg_dump mydb --file=db.dump --host=localhost --username=admin
these errors showed up:
pg_dump: [archiver (db)] query failed: ERROR: permission denied for schema topology
pg_dump: [archiver (db)] query was: LOCK TABLE topology.topology IN ACCESS SHARE MODE
Then I tried this command and it went well:
pg_dump -Fc mydb > db.dump
When I tried to restore the db on the production server I used this command (after creating a user and a database owned by that user):
psql -d mydb --file=db.dump
this error showed up:
The input is a PostgreSQL custom-format dump.
Use the pg_restore command-line client to restore this dump to a database.
Then I used this command to restore it:
pg_restore -d mydb db.dump
and it went well, but when I ran the server using this command:
python manage.py runserver
this error showed up:
return value.replace(b'\0', b'').decode()
UnicodeDecodeError: 'utf-8' codec can't decode bytes in position 1-2: invalid continuation byte
Try this as a superuser. Apparently "admin" is not a superuser.
Your first attempt is creating a plain-text backup, which you can restore using "psql". Your second attempt ("-Fc") creates a custom format backup, and you need "pg_restore" to restore it.
And don't forget to migrate the global objects (roles, tablespaces etc.), using "pg_dumpall -g".
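A minimal sketch of that last step, reusing the names from the question and assuming you can connect as the postgres superuser on both machines:
# On the virtual machine: dump roles, tablespaces and other globals only.
pg_dumpall -g -U postgres > globals.sql
# On the production server: replay the globals first, then restore the database.
psql -U postgres -d postgres -f globals.sql
pg_restore -U postgres -d mydb db.dump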

Fixing corrupt table

I was trying to upgrade Sentry and a table in my database got corrupt.
After reading about vacuum and reindex I was able to track down the issue to a single table.
Doing a select * from any other table works just fine, but this particular one seems to be problematic. Is there a way I can fix the table, or, worst case scenario, dump all other tables somehow?
pg_dump -T corrupt_table > bkp.sql doesn't work:
bash-4.4# pg_dump -U XXXXXX -T sentry_identityprovider sentry > bkp.sql
pg_dump: [archiver (db)] query failed: ERROR: cache lookup failed for attribute 1 of relation 45941
pg_dump: [archiver (db)] query was: SELECT tableoid, oid, conname, confrelid, pg_catalog.pg_get_constraintdef(oid) AS condef FROM pg_catalog.pg_constraint WHERE conrelid = '45954'::pg_catalog.oid AND contype = 'f'
Please avoid comments like "Well, go get your backups". I'm asking because I don't have a backup.
Also, please avoid comments like "Well, if you don't have backups, shit happens". I'm asking because there was an error in the execution of the backups and none were made.
Also, please avoid any other helpless comments related to backups. Really. You're not helping me that way.
At some stage I have been able to dump and restore an individual table as per below. Note in the pg_dump help that you should be able to do a full dump and exclude the corrupt table, though I have never tried it (a per-table workaround is sketched after the flag listing below). I'm not sure why yours fails; dumping a single good table might give the answer. Hope it works for you.
pg_dump -t good_table old_DB -U youruser -f good_table_BUP.sql
psql -f good_table_BUP.sql new_DB
-t, --table=TABLE dump the named table(s) only
-T, --exclude-table=TABLE do NOT dump the named table(s)
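Untested, but since pg_dump -T still failed against the corrupt catalog above, a per-table loop may be the safer route: dump every good table individually and skip the broken one. A hedged sketch, with the user, database and table names taken from the question:
# Dump each public table except the corrupt one, one file per table.
for t in $(psql -U XXXXXX -d sentry -At -c "select tablename from pg_tables where schemaname = 'public' and tablename <> 'sentry_identityprovider'"); do
    pg_dump -U XXXXXX -t "public.$t" sentry -f "bkp_$t.sql"
done
Each resulting bkp_<table>.sql can then be loaded into the new database with psql -f, as in the two-liner above.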

postgres restore reporting spurious primary key issue

I'm trying to restore a Postgres database, with the following command:
pg_restore --verbose -h localhost -p 5436 -d my_database --format=c --clean --no-owner --no-privileges --no-tablespaces --jobs=1 Sunday2.backup
During the restore, I see this error:
pg_restore: [archiver (db)] Error from TOC entry 2540; 0 16531 TABLE DATA foo_searchresult uoj9bg6vn4cqm
pg_restore: [archiver (db)] COPY failed for table "foo_searchresult": ERROR: duplicate key value violates unique constraint "foo_searchresult_pkey"
DETAIL: Key (id)=(63) already exists.
CONTEXT: COPY foo_searchresult, line 1
I went back to the source database and ran this:
select id, count(*)
from foo_searchresult
group by id
having count(*) > 1
and got nothing.
Now if I just try to restore that one table to a brand-new database:
pg_restore --verbose -h localhost -p 5436 -d brand_new_database --format=c --clean --no-owner --no-privileges --no-tablespaces --jobs=1 -t foo_searchresult Sunday2.backup
it comes back clean.
UPDATE: I just tried restoring the ENTIRE backup to a brand-new database, and it seems to have made it past foo_searchresult without issue.
(Incidentally, the source database is 9.4, and the target database is 9.5, but I get the same results using the pg_restore from a 9.4 or 9.5 distribution.)
UPDATE: It seems that dropping the database, then creating an empty one and re-loading into that (rather than using the --clean flag), resolved a whole multitude of issues.
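For reference, a hedged sketch of that sequence with the names and flags from the question (dropdb/createdb are the standard wrappers):
dropdb -h localhost -p 5436 my_database
createdb -h localhost -p 5436 my_database
pg_restore --verbose -h localhost -p 5436 -d my_database --format=c --no-owner --no-privileges --no-tablespaces --jobs=1 Sunday2.backup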
Anyway, my question was: "Has anyone seen this before, or have any idea how to fix it?"