I'm trying to restore a Postgres database, with the following command:
pg_restore --verbose -h localhost -p 5436 -d my_database --format=c --clean --no-owner --no-privileges --no-tablespaces --jobs=1 Sunday2.backup
During the restore, I see this error:
pg_restore: [archiver (db)] Error from TOC entry 2540; 0 16531 TABLE DATA foo_searchresult uoj9bg6vn4cqm
pg_restore: [archiver (db)] COPY failed for table "foo_searchresult": ERROR: duplicate key value violates unique constraint "foo_searchresult_pkey"
DETAIL: Key (id)=(63) already exists.
CONTEXT: COPY foo_searchresult, line 1
I went back to the source database and ran this:
select id, count(*)
from foo_searchresult
group by id
having count(*) > 1
and got nothing.
Now if I just try to restore that one table to a brand-new database:
pg_restore --verbose -h localhost -p 5436 -d brand_new_database --format=c --clean --no-owner --no-privileges --no-tablespaces --jobs=1 -t foo_searchresult Sunday2.backup
it comes back clean.
UPDATE: I just tried restoring the ENTIRE backup to a brand-new database, and it seems to have made it past foo_searchresult without issue.
(Incidentally, the source database is 9.4, and the target database is 9.5, but I get the same results using the pg_restore from a 9.4 or 9.5 distribution.)
UPDATE: So it seems that dropping the database, then creating an empty one and re-loading that (rather than using the --clean flag) resolved a whole multitude of issues
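For the record, that drop-and-recreate workflow looks roughly like this (a sketch using the names from the command above; note that --clean is omitted, since the freshly created database is already empty):

```shell
# Drop and recreate the target database, then restore without --clean.
# Host, port, database, and file names are taken from the question.
dropdb -h localhost -p 5436 my_database
createdb -h localhost -p 5436 my_database
pg_restore --verbose -h localhost -p 5436 -d my_database \
    --format=c --no-owner --no-privileges --no-tablespaces --jobs=1 Sunday2.backup
```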
Anyway, my question was "Has anyone seen this before, or have any idea how to fix it?"
Related
I'm trying to dump tables from a production environment and restore them into a dev one. However, when dumping and restoring this table with the following command:
pg_restore --no-owner --no-acl --clean --if-exists -d database dump_file.dump
I get an error stating that I can't drop the table unless I use something like CASCADE (i.e., drop all the other tables that depend on it). Is there a way to determine which tables would be dropped? Or is there a way to tell pg_dump to dump the table I want along with all of its related tables?
Here's the error raised:
pg_restore: while PROCESSING TOC:
pg_restore: from TOC entry 4066; 2606 30526 CONSTRAINT table1 pkey user
pg_restore: error: could not execute query: ERROR: cannot drop constraint pkey on table public.table1 because other objects depend on it
DETAIL: constraint id_fkey on table public.dag depends on index public.pkey
HINT: Use DROP ... CASCADE to drop the dependent objects too...
You have a table in the dev database whose primary key other objects depend on, so it cannot be dropped before the restore. This is proper behavior.
You are not dumping/restoring a particular table; you are dumping/restoring the entire database.
If you want to recreate the production database as a dev one, then do:
pg_restore -C --no-owner --no-acl --clean --if-exists -d postgres dump_file.dump
The -C together with --clean will DROP DATABASE db_name and then rebuild it from scratch: pg_restore connects to the database postgres to do the DROP/CREATE of db_name, and then connects to db_name to load the rest of the objects.
This is the best way to clean out cruft and start at a consistent state.
UPDATE
Update your question with the pg_dump command so it is evident what you are doing.
If you want to see whether a particular table has dependencies, in the original database use psql and do \d the_table to see what the dependencies to and from the table are. If you tell pg_dump to dump a single table it will dump just that table. It will not follow dependencies and dump those also. That is up to you to do.
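If you want the same information without psql, the foreign keys that reference a given table can be listed from the system catalogs (a sketch; the_table is a placeholder for your table name):

```sql
-- List foreign-key constraints on other tables that point at the_table
SELECT conname,
       conrelid::regclass AS referencing_table
FROM pg_constraint
WHERE contype = 'f'
  AND confrelid = 'the_table'::regclass;
```

Those referencing tables are the ones a DROP ... CASCADE would touch.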
Look into using a schema management tool to do your changes/migrations. I use Sqitch for this.
The PostgreSQL version I am trying to restore is 9.4.10. I backed up a database from the same version. The command I am using for restore is:
/opt/PostgreSQL/9.4/bin/pg_restore -U postgres --port=5432 -v --dbname=mydb < /backups/309646/WAL/pipe_309646
The error I get is:
pg_restore: executing BLOB 71197
pg_restore: [archiver (db)] Error from TOC entry 3822; 2613 71197 BLOB 71197 user
pg_restore: [archiver (db)] could not execute query: ERROR: duplicate key value violates unique constraint "pg_largeobject_metadata_oid_index"
DETAIL: Key (oid)=(71197) already exists.
Command was: SELECT pg_catalog.lo_create('71197');
This is repeated in the pg errors some 112 times. I need to restore to the same database. Here is the command I used for dumping:
/opt/PostgreSQL/9.4/bin/pg_dump -U postgres -Fc -b --port=5432 'mydb' -t public.users > /backups/309646/WAL/pipe_309646
Any indication as to why this is happening? How can I mitigate this error?
You are trying to restore a large object into a database that already contains a large object with the same oid.
Use a new database that does not contain any large objects yet as target for the restore.
Alternatively, drop the large objects first with
SELECT lo_unlink(oid) FROM pg_largeobject_metadata;
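For example, restoring into a freshly created database instead (a sketch based on the commands in the question; the new database name is made up):

```shell
# Create an empty database and point the same restore at it.
createdb -U postgres --port=5432 mydb_fresh
/opt/PostgreSQL/9.4/bin/pg_restore -U postgres --port=5432 -v \
    --dbname=mydb_fresh /backups/309646/WAL/pipe_309646
```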
When trying to make a PostgreSQL database dump, we got the following error and the process stopped immediately.
Command used:
openbravo#master.akluck.com:~
07/26 11:48:11> pg_dump -U tad -h localhost -p 5932 -F c -b -v -f /home/openbravo/dump26072018.dmp openbravo
Output:
pg_dump: reading schemas
pg_dump: reading user-defined tables
pg_dump: schema with OID 67046 does not exist
pg_dump: *** aborted because of error
Can anyone guide me on how to sort out this issue?
Update:
I followed this tutorial
http://www.aukema.org/2011/06/fixing-complex-corruption-in-my-dna.html
And I can see there are objects without a schemaname in pg_tables.
But I don't know how to update those missing schema names in pg_tables. The last part of the tutorial is not very clear; I hope someone can shed some light.
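In case it helps, a catalog query along these lines (an untested sketch) should list relations whose schema OID no longer exists, which is what pg_dump is tripping over:

```sql
-- Relations whose relnamespace points at a schema that is gone
SELECT c.oid, c.relname, c.relnamespace
FROM pg_class c
LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.oid IS NULL;
```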
Finally, I found a way to take the backup by excluding the corrupted table, as follows:
pg_dump --exclude-table=ad_context_info -h localhost -p 5932 -U postgres > dumpsabnew.dmp
I'm trying to learn how to import a database into PostgreSQL.
My attempt failed when I used pg_restore -C ... -d ... to combine creating a new database and importing the file into it in one command:
$ sudo -u postgres pg_restore -C -d postgres dvdrental.tar
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 2194; 1262 17355 DATABASE pagila postgres
pg_restore: [archiver (db)] could not execute query: ERROR: invalid locale name: "English_United States.1252"
Command was: CREATE DATABASE pagila WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'English_United States.1252' LC_CTYPE = 'English_United States.1252';
pg_restore: [archiver (db)] could not execute query: ERROR: database "pagila" does not exist
Command was: ALTER DATABASE pagila OWNER TO postgres;
pg_restore: [archiver (db)] could not reconnect to database: FATAL: database "pagila" does not exist
My questions are:
I was wondering why my first attempt failed?
Is there a way by running just one command?
The document says:
-C
--create
Create the database before restoring into it. If --clean is also specified, drop and recreate the target database before
connecting to it.
and even provide an example:
Assume we have dumped a database called mydb into a custom-format
dump file:
$ pg_dump -Fc mydb > db.dump
To drop the database and recreate it from the dump:
$ dropdb mydb
$ pg_restore -C -d postgres db.dump
The database named in the -d switch can be any database existing in
the cluster; pg_restore only uses it to issue the CREATE DATABASE
command for mydb. With -C, data is always restored into the
database name that appears in the dump file.
Thanks.
Update:
Thanks again. I found the binary file toc.dat in the dump contains the
CREATE DATABASE pagila WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'English_United States.1252' LC_CTYPE = 'English_United States.1252';
Is it possible to modify it to make sudo -u postgres pg_restore -C -d postgres dvdrental.tar work?
Note that I have a working solution:
Separate creation of a new database in psql and importing from the file to the new database:
template1=# CREATE DATABASE dvdrental;
CREATE DATABASE
$ sudo -u postgres pg_restore -d dvdrental dvdrental.tar
$
Your import failed because the operating system where you are trying to import the dump does not know the locale English_United States.1252. Either the operating system is not Windows, or Windows doesn't have that locale installed.
pg_restore -C tries to create the database in the same way as it was on the original system, which is not possible. Explicitly creating the database works because it uses a locale that exists.
There is no way to make that CREATE DATABASE command succeed except to have the locale present, so if that is not possible, you have to run two commands to restore the dump.
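The two commands would look roughly like this (a sketch; the locale shown is an example, pick one that locale -a on your system actually lists):

```shell
# Create the database yourself with a locale the OS knows about,
# then restore into it without -C.
createdb -U postgres -T template0 --locale=en_US.UTF-8 pagila
pg_restore -U postgres -d pagila dvdrental.tar
```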
I need to somehow get a set of tables from my dev DB into my production DB. I've been creating a dump file from the dev DB and using pg_restore on the production DB. The problem now is that I need to preserve one table (called users) on the production DB while replacing the others.
I think I have the dump properly from this command
pg_dump -Fc --no-acl --no-owner -h localhost -U <USER> --exclude-table=users* --data-only <DB NAME> > test.dump
But I can't get the restore part to work. I tried the following command
pg_restore -Fc --no-acl --no-owner -h <PROD HOST> -U <USER> -d <DB NAME> -p <PORT> <FILE LOCATION>
But I get the following errors:
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 2009; 0 121384 TABLE DATA idx_descs Jason
pg_restore: [archiver (db)] COPY failed for table "idx_descs": ERROR: duplicate key value violates unique constraint "idx_descs_pkey"
DETAIL: Key (id)=(6) already exists.
CONTEXT: COPY idx_descs, line 1
It seems like for the tables I'm trying to overwrite, it's just trying to append the data and failing because there are now duplicate primary keys. Any ideas how to do this? Thanks
So you need to reassign primary keys?
You could try restoring to a temporary table (say, in the failing case, idx_descs_temp), then doing something like:
with t as ( select * from idx_descs_temp )
insert into idx_descs
select id + 100000 [or whatever], [other fields] from t;
Afterwards you need to reset the sequences (if applicable; fill in the sequence name):
select setval( 'idx_descs_id_seq'::regclass, 100000 + [suitable increment]);
If you have a large number of tables, you could try to automate this using the system catalogs.
Note though that you would also have to renumber foreign key references. It might be less painful to move the data in the production DB first. If you are using an ORM, you could also automate this via application APIs.
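For the sequence part, the reset value can be derived from the table itself rather than hard-coded (a sketch; assumes id is a serial column, so pg_get_serial_sequence can find its sequence):

```sql
-- Point the sequence behind idx_descs.id at the current maximum
SELECT setval(pg_get_serial_sequence('idx_descs', 'id'),
              (SELECT max(id) FROM idx_descs));
```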