I have a database that works, but it has some problems. I need to migrate the database to a new version of PostgreSQL, but when I try to make a dump with pg_dump or pg_dumpall I get something like this:
pg_dump: [archiver (db)] query failed: ERROR: unexpected chunk number 2 (expected 0) for toast value 78482 in pg_toast_2618
pg_dump: [archiver (db)] query was: SELECT pg_catalog.pg_get_viewdef('78478'::pg_catalog.oid) AS viewdef
However, if I dump a single table on its own, it works.
I want to make the dump piecemeal. I already have the structure of all tables and a script to create the current indexes. When I ran pg_dumpall on another, healthy database, I saw something like this in the dump file:
ALTER TABLE ONLY schema_name.table_name ALTER COLUMN id_column SET DEFAULT nextval('sequence_name'::regclass);
I need to write a script that sets the sequence for each table. Where can I see the matching between sequences and tables?
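The closest I have found so far is querying pg_depend, which records which sequence is owned by which column; something like this (a sketch, I have not verified it against the broken database):

SELECT seq.relname  AS sequence_name,
       tab.relname  AS table_name,
       attr.attname AS column_name
FROM pg_class seq
JOIN pg_depend dep
  ON dep.classid = 'pg_class'::regclass
 AND dep.objid   = seq.oid
 AND dep.deptype = 'a'
JOIN pg_class tab ON tab.oid = dep.refobjid
JOIN pg_attribute attr
  ON attr.attrelid = tab.oid
 AND attr.attnum   = dep.refobjsubid
WHERE seq.relkind = 'S';

(For a single column there is also pg_get_serial_sequence('table_name', 'column_name').)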
Does anyone have experience with such a migration? Which problems await me later? Are there special tools for migrating PostgreSQL databases? Any different solutions?
I am trying to dump tables from a production environment into a dev one. However, when dumping this table and restoring it with the following command:
pg_restore --no-owner --no-acl --clean --if-exists -d database dump_file.dump
I get an error stating that I can't drop that table unless I use something like CASCADE (i.e. dropping all other tables that depend on it). Is there a way to determine the tables that would be dropped? Is there a way to tell pg_dump to dump the table I'm interested in along with all related tables?
Here's the error raised:
pg_restore: while PROCESSING TOC:
pg_restore: from TOC entry 4066; 2606 30526 CONSTRAINT table1 pkey user
pg_restore: error: could not execute query: ERROR: cannot drop constraint pkey on table public.table1 because other objects depend on it
DETAIL: constraint id_fkey on table public.dag depends on index public.pkey
HINT: Use DROP ... CASCADE to drop the dependent objects too...
You have a table in the dev database whose primary key other objects depend on, so it cannot be dropped before the restore. This is proper behavior.
I do not see you dumping/restoring a particular table; you are dumping/restoring the entire database.
If you want to recreate the production database as a dev one, then do:
pg_restore -C --no-owner --no-acl --clean --if-exists -d postgres dump_file.dump
The -C together with --clean will DROP DATABASE db_name and then rebuild it from scratch: pg_restore connects to the database postgres to do the DROP/CREATE of db_name, then connects to db_name to load the rest of the objects.
This is the best way to clean out cruft and start at a consistent state.
UPDATE
Update your question with the pg_dump command so it is evident what you are doing.
If you want to see whether a particular table has dependencies, use psql against the original database and run \d the_table to see the dependencies to and from the table. If you tell pg_dump to dump a single table, it will dump just that table; it will not follow dependencies and dump those as well. That is up to you to do.
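If you prefer a query over \d, something along these lines lists the foreign keys that reference a table (a sketch; public.table1 stands in for yours):

SELECT conname, conrelid::regclass AS referencing_table
FROM pg_catalog.pg_constraint
WHERE confrelid = 'public.table1'::regclass
  AND contype = 'f';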
Look into using a schema management tool to do your changes/migrations. I use Sqitch for this.
The PostgreSQL version I am trying to restore to is 9.4.10. I backed up the database from the same version. The command I am using for restore is:
/opt/PostgreSQL/9.4/bin/pg_restore -U postgres --port=5432 -v --dbname=mydb < /backups/309646/WAL/pipe_309646
The error I get is:
pg_restore: executing BLOB 71197
pg_restore: [archiver (db)] Error from TOC entry 3822; 2613 71197 BLOB 71197 user
pg_restore: [archiver (db)] could not execute query: ERROR: duplicate key value violates unique constraint "pg_largeobject_metadata_oid_index"
DETAIL: Key (oid)=(71197) already exists.
Command was: SELECT pg_catalog.lo_create('71197');
This is repeated in the pg errors some 112 times. I need to restore to the same database. Here is the command I used for dumping:
/opt/PostgreSQL/9.4/bin/pg_dump -U postgres -Fc -b --port=5432 'mydb' -t public.users > /backups/309646/WAL/pipe_309646
Any indication as to why this is happening? How can I mitigate this error?
You are trying to restore a large object into a database that already contains a large object with the same oid.
Use a new database that does not contain any large objects yet as target for the restore.
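For example (mydb_fresh is just an illustrative name):

createdb -U postgres --port=5432 mydb_fresh
/opt/PostgreSQL/9.4/bin/pg_restore -U postgres --port=5432 -v --dbname=mydb_fresh /backups/309646/WAL/pipe_309646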
Alternatively, drop the large objects first with
SELECT lo_unlink(oid) FROM pg_largeobject_metadata;
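You can run that against the target database from the shell, e.g.:

psql -U postgres --port=5432 -d mydb -c "SELECT lo_unlink(oid) FROM pg_largeobject_metadata;"

Note that this unlinks all large objects in that database, so only do it if you intend to restore them all from the dump.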
I am writing a backup script for a PostgreSQL database. I want to execute the script by a cron job.
First I created a system user "backup"
In psql I executed the following statements:
CREATE USER backup;
GRANT CONNECT ON DATABASE confluence TO backup;
GRANT USAGE ON SCHEMA public TO backup;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO backup;
First of all, was there any fatal error in the above statements, or is there a serious security issue?
Is it right that the user backup is a read-only user? That is what I read in a tutorial, but to be honest I have no idea what a SCHEMA / schemas are...
When executing pg_dump as user backup I get the following:
pg_dump: [archiver (db)] query failed: ERROR: permission denied for relation EVENTS
pg_dump: [archiver (db)] query was: LOCK TABLE public."EVENTS" IN ACCESS SHARE MODE
As I am an absolute noob with databases, I want to ask you before I add more and more statements without knowing what I am doing...
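One thing I picked up while reading: GRANT ... ON ALL TABLES only covers tables that exist right now, and pg_dump apparently also needs SELECT on sequences. So perhaps something like this in addition (I have not tried it yet; ALTER DEFAULT PRIVILEGES only covers objects created by the role that runs it):

GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO backup;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO backup;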
I am running psql (PostgreSQL) 10.5 on Ubuntu (package 10.5-0ubuntu0.18.04).
I was trying to upgrade Sentry and a table in my database got corrupted.
After reading about vacuum and reindex I was able to track down the issue to a single table.
Doing a select * from any other table works just fine, but this particular one seems to be problematic. Is there a way I can fix the table, or, worst case scenario, dump all other tables somehow?
pg_dump -T corrupt_table > bkp.sql doesn't work:
bash-4.4# pg_dump -U XXXXXX -T sentry_identityprovider sentry > bkp.sql
pg_dump: [archiver (db)] query failed: ERROR: cache lookup failed for attribute 1 of relation 45941
pg_dump: [archiver (db)] query was: SELECT tableoid, oid, conname, confrelid, pg_catalog.pg_get_constraintdef(oid) AS condef FROM pg_catalog.pg_constraint WHERE conrelid = '45954'::pg_catalog.oid AND contype = 'f'
Please avoid comments like "Well, go get your backups". I'm asking because I don't have a backup.
Also, please avoid comments like "Well, if you don't have backups, shit happens". I'm asking because there was an error in the execution of the backups and none were made.
Also, please avoid any other helpless comments related to backups. Really. You're not helping me that way.
At some stage I have been able to dump and restore an individual table as per below. Note in the help for pg_dump that you should be able to do a full dump and exclude the corrupt table; I have never tried it. I am not sure why yours fails; dumping a single good table might give the answer. Hope it works for you.
pg_dump -t good_table old_DB -U youruser -f good_table_BUP.sql
psql -f good_table_BUP.sql new_DB
-t, --table=TABLE dump the named table(s) only
-T, --exclude-table=TABLE do NOT dump the named table(s)
I was given a database dump file; I don't know the user ID of whoever dumped it or their privileges.
I use PostgreSQL 9.6.7-1 with pgAdmin 4 (v3.0), OS: Windows 10.
First, I created a database in pgAdmin with the same name as the given file.
I used the restore option to restore the file, but after a few seconds I got messages like:
pg_restore: executing SEQUENCE SET xxxx
pg_restore: [archiver (db)] Error from TOC entry 4309; 0 0 SEQUENCE SET xxxx postgres
pg_restore: [archiver (db)] could not execute query: ERROR: relation "public.xxxx" does not exist
LINE 1: SELECT pg_catalog.setval('public.xxxx', 1, false);
^
Command was: SELECT pg_catalog.setval('public.xxxx', 1, false);
and below it all, the warning:
"WARNING: errors ignored on restore: 62"
Comparing with others' answers, I don't even get a single bit of data restored.
I have also tried with the pg_restore command, but I get the same result.
It looks like the dump you were given is not a complete backup. It only has the data, not the object definitions. That is, it was created by pg_dump using -a, --data-only, or --section=data.
Unless you already know what the object definitions are from some other source (e.g., an existing database server with the same schema definitions, or a dump file generated with pg_dump -s), you will have a hard time loading this data.
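For example, assuming you have access to a database with the same definitions and that the given file is a custom-format dump (names here are illustrative):

pg_dump -s reference_db -f schema.sql     # object definitions only
psql -d target_db -f schema.sql           # create the objects in the target
pg_restore -d target_db the_given_file    # then load the data-only dump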
Please check that the version of PostgreSQL/pgAdmin 4 matches the version of the .sql file you want to import into your database, and instead of restoring at public you should restore directly at the database itself.
Drop the database and re-create a fresh one (name = 'xyz'), then restore at that same database ('xyz' -> Right-click -> Restore -> Filename -> select the .sql file -> Restore), then refresh the database.
Doing that will create a new schema just above the public schema.