pg_restore WARNING: errors ignored on restore: 62

I was given a database dump file; I don't know the user ID that dumped it or its privileges.
I use PostgreSQL 9.6.7-1 with pgAdmin 4 (v3.0), OS: Windows 10.
First, I created a database in pgAdmin with the same name as the given file.
I used the Restore option to restore the file, but after a few seconds
I got messages like these:
pg_restore: executing SEQUENCE SET xxxx
pg_restore: [archiver (db)] Error from TOC entry 4309; 0 0 SEQUENCE SET xxxx postgres
pg_restore: [archiver (db)] could not execute query: ERROR: relation "public.xxxx" does not exist
LINE 1: SELECT pg_catalog.setval('public.xxxx', 1, false);
^
Command was: SELECT pg_catalog.setval('public.xxxx', 1, false);
and below it all, the warning:
"WARNING: errors ignored on restore: 62"
Comparing with others' answers, I don't even get a single bit of data restored.
I have also tried running pg_restore from the command line, but I get the same result.

It looks like the dump you were given is not a complete backup. It only has the data, not the object definitions. That is, it was created by pg_dump using -a, --data-only, or --section=data.
Unless you already know what the object definitions are from some other source (e.g., an existing database server with the same schema definitions, or a dump file generated with pg_dump -s), you will have a hard time loading this data.
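A quick way to confirm this is to list the archive's table of contents; a minimal sketch, assuming a custom-format dump named mydb.dump (hypothetical file name):

pg_restore -l mydb.dump

If the listing shows only TABLE DATA and SEQUENCE SET entries, with no TABLE, INDEX, or CONSTRAINT definitions, the dump is data-only. If you can get the schema from elsewhere (e.g., pg_dump -s against a server that still has the definitions), restore that first and then load this data dump on top.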

{solved}
Please check the version of PostgreSQL/pgAdmin 4 and the version of the ".sql" file you want to import into your database, and instead of restoring into the public schema you should restore directly into the database itself.
Drop the database and re-create a fresh database (name='xyz'), then on that same database ('xyz' -> Right-click -> Restore -> Filename -> select the ".sql" file -> Restore), then refresh the database.
Doing that will create a new schema just above the public schema.
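For reference, a command-line equivalent of that pgAdmin flow, as a rough sketch (the database name 'xyz' comes from the answer above; backup.sql is a hypothetical file name):

dropdb -U postgres xyz
createdb -U postgres xyz
psql -U postgres -d xyz -f backup.sql

Note that psql is for plain-SQL dumps; custom-format archives need pg_restore instead.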

Related

Postgres CloudSQL Migration: ERROR: permission denied to set parameter "log_min_duration_statement"

I am trying to use the Database Migration Service to migrate an existing database into CloudSQL.
When I start the migration, I receive the following error:
finished setup replication with errors: [api_production]: error importing schema: failed to restore schema: stderr=
pg_restore: while PROCESSING TOC:
pg_restore: from TOC entry 3997; 0 0 DATABASE PROPERTIES api_production postgres
pg_restore: error: could not execute query: ERROR: permission denied to set parameter "log_min_duration_statement"
Command was: ALTER DATABASE api_production SET log_min_duration_statement TO '500ms';
pg_restore: warning: errors ignored on restore: 1
, stdout=
How can I continue the migration, ignoring the SET PARAMETER statement?
I have finally been able to start the migration by resetting the parameters on the source database (setting them to match was insufficient).
This can be done from a postgres console on the source database:
reset log_min_duration_statement;
ALTER DATABASE <database_name> RESET log_min_duration_statement;
ALTER DATABASE postgres RESET log_min_duration_statement;
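To verify that no per-database overrides remain before retrying the migration, you can inspect pg_db_role_setting, where ALTER DATABASE ... SET values are stored; a quick sketch:

SELECT d.datname, s.setconfig
FROM pg_db_role_setting s
JOIN pg_database d ON d.oid = s.setdatabase;

Once no row mentions log_min_duration_statement, the dump should no longer contain the ALTER DATABASE ... SET statement that the migration tripped over.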
The Database Migration Service is a managed Google Cloud service; it has restricted access to certain system procedures and tables that require advanced privileges. The 'postgres' user is the most privileged user available in Cloud SQL, but it is not a Postgres superuser. See the public docs for more information about PostgreSQL users.
There are some other parameters that you can change by running the ALTER DATABASE command; however, "log_statement" and "log_min_duration_statement" are unfortunately not among them.
The PostgreSQL documentation also notes this: "Certain variables cannot be set this way, or can only be set by a superuser."
However, you can change the particular setting in the Console via the flags on the database edit screen, and remove these statements from the migration job to avoid these failure errors.
Please refer to the documentation to learn more about configuring database flags.
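If you are running the restore yourself rather than through the managed service, another way to skip the offending statement is to edit the archive's table of contents; a sketch, assuming a custom-format dump named api_production.dump (hypothetical name):

pg_restore -l api_production.dump > toc.list
# in toc.list, comment out the "DATABASE PROPERTIES" entry by prefixing the line with a semicolon
pg_restore -L toc.list -d api_production api_production.dump

pg_restore then restores only the entries left uncommented in the list file.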

Dump broken Postgres database

I have a database; it works, but it has some problems. I need to migrate the database to a new version of Postgres, but when I try to make a dump with pg_dump or pg_dumpall I get something like this:
pg_dump: [archiver (db)] query failed: ERROR: unexpected chunk number 2 (expected 0) for toast value 78482 in pg_toast_2618
pg_dump: [archiver (db)] query was: SELECT pg_catalog.pg_get_viewdef('78478'::pg_catalog.oid) AS viewdef
But if I dump just one separate table, it works.
I want to make the dump piecemeal. I already have the structure of all the tables and a script to create the actual indexes. When I made a pg_dumpall of another, healthy database, I saw something like this in the dump file:
ALTER TABLE ONLY schema_name.table_name ALTER COLUMN id_column SET DEFAULT nextval('sequence_name'::regclass);
I need to write a script which sets the sequence for each table; where can I see the matching between sequences and tables?
Does anyone have experience with such a migration? What problems await me later? Are there special tools for migrating a Postgres database? Any other solutions?
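The sequence-to-table matching is recorded in the system catalogs; a sketch query that lists it for serial-style (owned) sequences:

SELECT s.relname AS sequence_name,
       t.relname AS table_name,
       a.attname AS column_name
FROM pg_class s
JOIN pg_depend d ON d.objid = s.oid AND d.deptype = 'a'
JOIN pg_class t ON d.refobjid = t.oid
JOIN pg_attribute a ON a.attrelid = t.oid AND a.attnum = d.refobjsubid
WHERE s.relkind = 'S';

For a single known column, SELECT pg_get_serial_sequence('schema_name.table_name', 'id_column'); returns the owning sequence directly.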

pg_restore for 9.4 fails with error 'could not execute query'

The PostgreSQL version I am trying to restore is 9.4.10. I backed up a database from the same version. The command I am using for restore is:
/opt/PostgreSQL/9.4/bin/pg_restore -U postgres --port=5432 -v --dbname=mydb < /backups/309646/WAL/pipe_309646
The error I get is:
pg_restore: executing BLOB 71197
pg_restore: [archiver (db)] Error from TOC entry 3822; 2613 71197 BLOB 71197 user
pg_restore: [archiver (db)] could not execute query: ERROR: duplicate key value violates unique constraint "pg_largeobject_metadata_oid_index"
DETAIL: Key (oid)=(71197) already exists.
Command was: SELECT pg_catalog.lo_create('71197');
This is repeated in the pg errors some 112 times. I need to restore to the same database. Here is the command I used for dumping:
/opt/PostgreSQL/9.4/bin/pg_dump -U postgres -Fc -b --port=5432 'mydb' -t public.users > /backups/309646/WAL/pipe_309646
Any indication as to why this is happening? How can I mitigate this error?
You are trying to restore a large object into a database that already contains a large object with the same oid.
Use a new database that does not contain any large objects yet as target for the restore.
Alternatively, drop the large objects first with
SELECT lo_unlink(oid) FROM pg_largeobject_metadata;
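As a sketch of that second approach against the database from the question (destructive: it removes all existing large objects, so only do this if they are disposable):

psql -U postgres -d mydb -c "SELECT lo_unlink(oid) FROM pg_largeobject_metadata;"
/opt/PostgreSQL/9.4/bin/pg_restore -U postgres --port=5432 -d mydb /backups/309646/WAL/pipe_309646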

How to migrate a PostgreSQL database from a virtual machine to a production server?

I tried to migrate a database from a virtual machine to a production server (both running Ubuntu),
but I ran into a lot of problems. First, when I tried to create a backup file with this command:
pg_dump mydb --file=db.dump --host=localhost --username=admin
this errors show up
pg_dump: [archiver (db)] query failed: ERROR: permission denied for schema topology
pg_dump: [archiver (db)] query was: LOCK TABLE topology.topology IN ACCESS SHARE MODE
Then I tried this command and it went well:
pg_dump -Fc mydb > db.dump
When I tried to restore the db to the production server, I used this command (after creating a user and a database owned by that user):
psql -d mydb --file=db.dump
this error showed up:
The input is a PostgreSQL custom-format dump.
Use the pg_restore command-line client to restore this dump to a database.
Then I used this command to restore it:
pg_restore -d mydb db.dump
and it went well, but when I ran the server using this command:
python manage.py runserver
this error showed up:
return value.replace(b'\0', b'').decode()
UnicodeDecodeError: 'utf-8' codec can't decode bytes in position 1-2: invalid continuation byte
Try this as a superuser; apparently "admin" is not a superuser.
Your first attempt creates a plain-text backup, which you can restore using "psql". Your second attempt ("-Fc") creates a custom-format backup, and you need "pg_restore" to restore it.
And don't forget to migrate the global objects (roles, tablespaces etc.), using "pg_dumpall -g".
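A minimal sketch of that last step, run as a superuser (host names are placeholders):

pg_dumpall -g -U postgres > globals.sql
psql -U postgres -h <PROD HOST> -f globals.sql postgres

Run it before restoring the database dump, so that any roles the restore references already exist on the production server.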

PostgreSQL: how to back up and overwrite specific tables

I need to be able to somehow get a set of tables from my dev db into my production db. I've just been creating a dump file from the dev db and using pg_restore on the production db. The problem now is that I need to preserve one table (called users) on the production db while replacing the others.
I think I have the dump working properly with this command:
pg_dump -Fc --no-acl --no-owner -h localhost -U <USER> --exclude-table=users* --data-only <DB NAME> > test.dump
But I can't get the restore part to work. I tried the following command
pg_restore -Fc --no-acl --no-owner -h <PROD HOST> -U <USER> -d <DB NAME> -p <PORT> <FILE LOCATION>
But I get the following errors:
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 2009; 0 121384 TABLE DATA idx_descs Jason
pg_restore: [archiver (db)] COPY failed for table "idx_descs": ERROR: duplicate key value violates unique constraint "idx_descs_pkey"
DETAIL: Key (id)=(6) already exists.
CONTEXT: COPY idx_descs, line 1
It seems like for the tables I'm trying to overwrite, it is just trying to append the data and running into trouble because there are now duplicate primary keys. Any ideas how to do this? Thanks.
So you need to reassign primary keys?
You could try restoring to a temporary table (say, in the failing case, idx_descs_temp), then doing something like:
with t as ( select * from idx_descs_temp )
insert into idx_descs
select id + 100000 [or whatever], [other fields] from t;
Afterwards you need to reset the sequence (if applicable; fill in the sequence name):
select setval( 'idx_descs_id_seq'::regclass, 100000 + [suitable increment]);
If you have a large number of tables, you could try to automate this using the system catalog; see the sketch below.
Note though that you also have to renumber foreign key references. It might be less painful to move the data in the production db first. If you are using an ORM, you could also automate this via application APIs.
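A sketch of the catalog query that could drive such automation, listing every ordinary table in public except the preserved users table (names are from the question):

SELECT tablename
FROM pg_tables
WHERE schemaname = 'public'
  AND tablename <> 'users';

Each returned name could then be fed through the restore-to-temp, renumber, and insert steps above.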