pg_restore for 9.4 fails with error 'could not execute query'

The PostgreSQL version I am trying to restore is 9.4.10. I backed up a database from the same version. The command I am using for restore is:
/opt/PostgreSQL/9.4/bin/pg_restore -U postgres --port=5432 -v --dbname=mydb < /backups/309646/WAL/pipe_309646
The error I get is:
pg_restore: executing BLOB 71197
pg_restore: [archiver (db)] Error from TOC entry 3822; 2613 71197 BLOB 71197 user
pg_restore: [archiver (db)] could not execute query: ERROR: duplicate key value violates unique constraint "pg_largeobject_metadata_oid_index"
DETAIL: Key (oid)=(71197) already exists.
Command was: SELECT pg_catalog.lo_create('71197');
This is repeated in the pg errors some 112 times. I need to restore to the same database. Here is the command I used for dumping:
/opt/PostgreSQL/9.4/bin/pg_dump -U postgres -Fc -b --port=5432 'mydb' -t public.users > /backups/309646/WAL/pipe_309646
Any indication as to why this is happening? How can I mitigate this error?

You are trying to restore a large object into a database that already contains a large object with the same oid.
Use a new database that does not contain any large objects yet as the target for the restore.
Alternatively, drop the large objects first with
SELECT lo_unlink(oid) FROM pg_largeobject_metadata;
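A minimal sketch of that second approach, reusing the database name, port, and paths from the question:
/opt/PostgreSQL/9.4/bin/psql -U postgres --port=5432 -d mydb -c "SELECT lo_unlink(oid) FROM pg_largeobject_metadata;"
/opt/PostgreSQL/9.4/bin/pg_restore -U postgres --port=5432 -v --dbname=mydb < /backups/309646/WAL/pipe_309646
The first command unlinks every existing large object in mydb, so the lo_create() calls in the archive no longer collide with existing oids.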

How do I dump a Google Cloud SQL for PostgreSQL DB to import back into a regular PostgreSQL DB?

I am trying to export my data from a Google Cloud SQL (PostgreSQL) instance in order to import it into a regular Postgres DB using pg_dump and pg_restore:
pg_dump -h sql_proxy -F t --no-owner --no-acl > backup.tar
pg_restore backup.tar -c
However, when running pg_restore I get these errors:
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 197; 1259 17010 TABLE xxx postgres
pg_restore: [archiver (db)] could not execute query: ERROR: role "postgres" does not exist
Command was: ALTER TABLE public.xxx OWNER TO postgres;
pg_restore: [archiver (db)] Error from TOC entry 198; 1259 17017 TABLE xxy postgres
pg_restore: [archiver (db)] could not execute query: ERROR: role "postgres" does not exist
Command was: ALTER TABLE public.xxy OWNER TO postgres;
...
I tried a few variations of flags with no luck. I found many articles on how to migrate the other way around (from PostgreSQL to Google Cloud SQL for PostgreSQL), and the Google Cloud docs only describe how to export data to be imported into a Cloud SQL DB again.
I would appreciate any help on how to avoid the errors above and how to migrate the DB with as few changes as possible.
You need to have the roles that are referenced already pre-created in the instance where you want to import the dump.
There are two ways to achieve that:
use pg_dumpall instead of pg_dump, or
run pg_dumpall --globals-only first and restore that dump (this will create the roles, among other things), as sketched below.
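A sketch of the second option, assuming the sql_proxy host from the question and a plain local target server (the name target_db is illustrative):
pg_dumpall -h sql_proxy --globals-only > globals.sql
psql -f globals.sql
pg_restore -c -d target_db backup.tar
The first two commands recreate the roles (such as postgres) on the target, so the ALTER ... OWNER TO statements in the archive can succeed. Note that dumping globals needs sufficient privileges on the source; a managed service may withhold them, in which case you can create the required roles by hand instead.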

pg_restore WARNING: errors ignored on restore: 62

I was given a database dump file; I don't know the user ID that dumped it or its privileges.
I use PostgreSQL 9.6.7-1 with pgAdmin 4 (v3.0), OS: Windows 10.
First, I created a database in pgAdmin with the same name as the given file.
I used the restore option to restore the file, but after a few seconds
I got messages like:
pg_restore: executing SEQUENCE SET xxxx
pg_restore: [archiver (db)] Error from TOC entry 4309; 0 0 SEQUENCE SET xxxx postgres
pg_restore: [archiver (db)] could not execute query: ERROR: relation "public.xxxx" does not exist
LINE 1: SELECT pg_catalog.setval('public.xxxx', 1, false);
^
Command was: SELECT pg_catalog.setval('public.xxxx', 1, false);
and below all, the warning:
"WARNING: errors ignored on restore: 62"
Comparing with others' answers, I don't even get a single bit of data restored.
I have also tried the pg_restore command directly, but I get the same result.
It looks like the dump you were given is not a complete backup. It only has the data, not the object definitions. That is, it was created by pg_dump using -a, --data-only, or --section=data.
Unless you already know what the object definitions are from some other source (e.g., an existing database server with the same schema definitions, or a dump file generated with pg_dump -s), you will have a hard time loading this data.
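One way to confirm this is to list the archive's table of contents (a sketch; the dump file name is illustrative):
pg_restore -l given_dump.backup
A data-only archive shows TABLE DATA and SEQUENCE SET entries but no TABLE entries for the CREATE TABLE statements, which matches the "relation does not exist" errors above.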
[Solved]
Check the version of PostgreSQL/pgAdmin 4 against the version of the ".sql" file you want to import into your database, and instead of restoring into the public schema, restore directly at the database level.
Drop the database and re-create a fresh one (name = 'xyz'), then on that same database ('xyz' -> Right-click -> Restore -> Filename -> select the ".sql" file -> Restore), then refresh the database.
Doing that will create a new schema just above the public schema.

postgres restore reporting spurious primary key issue

I'm trying to restore a Postgres database, with the following command:
pg_restore --verbose -h localhost -p 5436 -d my_database --format=c --clean --no-owner --no-privileges --no-tablespaces --jobs=1 Sunday2.backup
During the restore, I see this error:
pg_restore: [archiver (db)] Error from TOC entry 2540; 0 16531 TABLE DATA foo_searchresult uoj9bg6vn4cqm
pg_restore: [archiver (db)] COPY failed for table "foo_searchresult": ERROR: duplicate key value violates unique constraint "foo_searchresult_pkey"
DETAIL: Key (id)=(63) already exists.
CONTEXT: COPY foo_searchresult, line 1
I went back to the source database and ran this:
select id, count(*)
from foo_searchresult
group by id
having count(*) > 1
and got nothing.
Now if I just try to restore that one table to a brand-new database:
pg_restore --verbose -h localhost -p 5436 -d brand_new_database --format=c --clean --no-owner --no-privileges --no-tablespaces --jobs=1 -t foo_searchresult Sunday2.backup
it comes back clean.
UPDATE: I just tried restoring the ENTIRE backup to a brand-new database, and it seems to have made it past foo_searchresult without issue.
(Incidentally, the source database is 9.4, and the target database is 9.5, but I get the same results using the pg_restore from a 9.4 or 9.5 distribution.)
UPDATE: It seems that dropping the database, then creating an empty one and loading the backup into that (rather than using the --clean flag) resolved a whole multitude of issues.
Anyway, my question was: "Has anyone seen this before, or have any idea how to fix it?"
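A plausible reason --clean misbehaves here: it only issues DROP statements for the objects being restored, and if any DROP fails (for example because of dependent objects), the old rows remain and the subsequent COPY collides with them. The sequence that worked, as a sketch reusing the host and port from the question:
dropdb -h localhost -p 5436 my_database
createdb -h localhost -p 5436 my_database
pg_restore --verbose -h localhost -p 5436 -d my_database --format=c --no-owner --no-privileges --no-tablespaces --jobs=1 Sunday2.backup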

postgresql how to backup and overwrite specific tables

I need to be able to somehow get a set of tables from my dev db into my production db. I've just been creating a dump file from the dev db and using pg_restore on the production db. The problem now is that I need to preserve one table (called users) on the production db while replacing the others.
I think I have the dump right with this command:
pg_dump -Fc --no-acl --no-owner -h localhost -U <USER> --exclude-table=users* --data-only <DB NAME> > test.dump
But I can't get the restore part to work. I tried the following command
pg_restore -Fc --no-acl --no-owner -h <PROD HOST> -U <USER> -d <DB NAME> -p <PORT> <FILE LOCATION>
But I get the following errors:
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 2009; 0 121384 TABLE DATA idx_descs Jason
pg_restore: [archiver (db)] COPY failed for table "idx_descs": ERROR: duplicate key value violates unique constraint "idx_descs_pkey"
DETAIL: Key (id)=(6) already exists.
CONTEXT: COPY idx_descs, line 1
It seems like for the tables I'm trying to overwrite, it is just trying to append the data and running into trouble because there are now duplicate primary keys. Any ideas how to do this? Thanks
So you need to reassign primary keys?
You could try restoring to a temporary table (say, in the failing case, idx_descs_temp), then doing something like:
with t as ( select * from idx_descs_temp )
insert into idx_descs
select id + 100000 [or whatever], [other fields] from t;
Afterwards you need to reset the sequences (if applicable -- fill in the sequence name):
select setval( 'idx_descs_id_seq'::regclass, 100000 + [suitable increment]);
If you have a large number of tables, you could try to automate this using the system catalogs, as in the sketch below.
Note though that you also have to renumber foreign key refs. It might be less painful to move the data in the production db first. If you are using an ORM, you could also automate this via application APIs.
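For the sequence resets, a sketch of how the system catalogs can generate the setval calls for every sequence in a schema (100000 is the illustrative offset from above):
select 'select setval(''' || c.oid::regclass || ''', 100000);'
from pg_class c
join pg_namespace n on n.oid = c.relnamespace
where c.relkind = 'S' and n.nspname = 'public';
Run the statements this query prints; a similar query over pg_constraint (contype 'p' and 'f') can enumerate the primary and foreign keys whose values need renumbering.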

PostgreSQL backup and restore

I'm having trouble restoring a PostgreSQL backup.
On my own computer I dump a table using the custom backup format.
On the server, I try to restore the file but it gives me several errors:
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 247; 1259 147321 TABLE county postgres
pg_restore: [archiver (db)] could not execute query: ERROR: type "geometry" does not exist
This is what I've done on the server:
psql -U postgres
\c justice
CREATE EXTENSION postgis;
pg_restore -p 5432 -U postgres -d justice data_import
I'm running PostgreSQL 9.3.2 on both computers. However I've got PostGIS 2.1.1 on my local computer and 2.0.4 on the server. Could that be a problem?
Why is my PostGIS extension not working on the server?
When I do not specify the database, it seems to work. However, where does the data go? It also writes out all the SQL statements to the screen.
pg_restore -p 5432 -U postgres public_county
PostgreSQL database dump complete
When I try to verify the installation of PostGIS it gives me an error. I think the extension might be improperly installed:
select PostGIS_full_version();
ERROR: function postgis_full_version() does not exist
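A quick sanity check is to look at what is actually installed in the database you are querying (a sketch; extensions are per-database, so run this inside justice):
\c justice
select extname, extversion from pg_extension;
select name, default_version, installed_version from pg_available_extensions where name = 'postgis';
If postgis does not appear in pg_extension here, the CREATE EXTENSION was probably run in a different database (psql -U postgres connects to the database named postgres by default).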