I have a PostgreSQL dump (created with pg_dump in the custom compressed format). I would like to pg_restore it onto another server, except for a few large tables. I have tried using the -l option and removing the unneeded tables from the list, as shown below. Is there a better approach? I'm not sure how efficient this one is.
pg_restore -l dumpfile.dmp > list.txt
egrep -v "logtable|summarytable|historytable" list.txt > listex.txt
pg_restore -Fc -v -p 5432 -d prism --use-list=listex.txt dumpfile.dmp 2>> error1.out &
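One refinement (a sketch, reusing the same table names as above): the TOC entries for row data are marked TABLE DATA, so filtering out only those lines keeps the table definitions in the restore while skipping their contents:

pg_restore -l dumpfile.dmp | egrep -v "TABLE DATA .*(logtable|summarytable|historytable)" > listex.txt
pg_restore -Fc -v -p 5432 -d prism --use-list=listex.txt dumpfile.dmp 2>> error1.out &

Efficiency-wise the list approach is fine: the custom format is seekable, so pg_restore skips over the omitted data entries rather than reading and decompressing them.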
I'm using the following command to back up my database (PostgreSQL 11.8):
pg_basebackup -D "C:\\temp" -F tar -X f -z -P -U myUser
And the following to restore:
I manually unpack the base.tar.gz => base.tar
pg_restore -h localhost -W -U myUser -c -C -d myDatabase -F tar -v "C:\\temp\\base.tar"
This results in the following error:
pg_restore: [tar archiver] could not find header for file "toc.dat" in tar archive
What am I doing wrong?
Also, I tried different variants of the restore (data only, etc.), but the missing toc.dat header issue persists in every case.
Thanks for your help!
You cannot use pg_basebackup and pg_restore together:
pg_basebackup is a physical backup tool
pg_restore can only be used with a logical backup created by pg_dump.
There is no single PostgreSQL command to restore a backup created with pg_basebackup.
To restore a physical backup see https://www.postgresql.org/docs/12/continuous-archiving.html#BACKUP-PITR-RECOVERY
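Roughly, a restore of the backup from the question would look like this sketch (Linux-style paths assumed; because -X f was used, the WAL needed for recovery is included in base.tar):

pg_ctl stop -D "$PGDATA"             # stop the server
rm -rf "$PGDATA"/*                   # empty the data directory
tar -xzf base.tar.gz -C "$PGDATA"    # unpack the base backup, including pg_wal
pg_ctl start -D "$PGDATA"            # the server replays the WAL on startup

For point-in-time recovery you would additionally configure restore_command and a recovery target before starting, as described in the linked documentation.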
I'm running a PostgreSQL DB and want to regularly compare the schema defined in schema.sql with the schema actually present on the DB, i.e. to verify that no manual changes have been applied on the remote and that schema.sql reflects what's really running on the DB.
What I'm currently doing is roughly:
create a dump from schema.sql using pg_virtualenv:
$ pg_virtualenv sh -c 'psql -U $PGDATABASE -f schema.sql
  pg_dump --schema-only --no-comments --no-owner --no-privileges --schema=public -U $PGDATABASE -f schema_local.sql'

(note: this is a single command; the quoted string spans both lines)
create a dump from the remote
$ pg_dump --schema-only --no-comments --no-owner --no-privileges --schema=public -U $PGREMOTE -f schema_upstream.sql
strip out comments, SET statements, empty lines, etc. and diff them
$ diff -u <(grep -Ev '^--|^SET|^$' schema_local.sql) <(grep -Ev '^--|^SET|^$' schema_upstream.sql)
That works reasonably well and I can easily spot schema changes in the diff if they occur. But the solution seems a bit hacky and I wonder if there's a better way to achieve the same thing?
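One small improvement is to fold the steps into a single script that exits non-zero when the schemas diverge, so it can run from cron or CI. A sketch (the script name is made up; the commands are the ones from above):

#!/bin/bash
# compare-schema.sh (hypothetical) - exits non-zero if the schemas differ
set -e
pg_virtualenv sh -c 'psql -U $PGDATABASE -f schema.sql
  pg_dump --schema-only --no-comments --no-owner --no-privileges \
          --schema=public -U $PGDATABASE -f schema_local.sql'
pg_dump --schema-only --no-comments --no-owner --no-privileges \
        --schema=public -U "$PGREMOTE" -f schema_upstream.sql
strip() { grep -Ev '^--|^SET|^$' "$1"; }    # drop comments, SETs, empty lines
diff -u <(strip schema_local.sql) <(strip schema_upstream.sql)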
I have a database dump that I generated with pg_dump, like so:
pg_dump -C remote_db -a --no-owner -t my_table > dump.sql
and I'm looking to copy a single table over from it into my local database, with data only (not schema) and with no ownership settings. I'm familiar with how to do it directly from another db using pg_dump, something like:
pg_dump -C remote_db -a --no-owner -t my_table | psql local_db
But I'm not sure how to replicate the same effect from a file.
I've tried something like:
pg_restore -d local_db -a --no-owner -t my_table dump.sql
But got an error:
pg_restore: [archiver] input file appears to be a text format dump. Please use psql.
I'm not sure how to use psql to achieve the same thing. Help would be appreciated.
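As the error message hints, a plain-format dump is just a SQL script, so psql replays it directly. For text dumps the filtering (data only, no owners) has to be chosen at dump time, and the dump above was already created with -a --no-owner, so loading it should simply be (local_db from the question):

psql -d local_db -f dump.sql

If you want pg_restore's selective switches (-t, -a, --no-owner) to work at restore time, create the dump in the custom format instead (pg_dump -Fc), which pg_restore can filter.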
I want to get an export of my Heroku application's Postgres database, however I want to exclude one table. Is this possible?
Here is the command I use to export my entire Postgres database:
$ PGUSER=my_username PGPASSWORD=my_password heroku pg:pull DATABASE_URL my-application-name
Maybe there is a way to exclude one table, or specify a list of tables to include?
With a normal pg_dump command you can specify the tables to include with the -t option and exclude tables with the -T option.
Try this:
$ PGPASSWORD=mypassword pg_dump -Fc --no-acl --no-owner -T <table_to_exclude> -h localhost -U myuser mydb > mydb.dump
Here is the relevant passage from the official PostgreSQL documentation:
-T table
--exclude-table=table
Do not dump any tables matching the table pattern. The pattern is interpreted according to the same rules as for -t. -T can be given more than once to exclude tables matching any of several patterns.
When both -t and -T are given, the behavior is to dump just the tables that match at least one -t switch but no -T switches. If -T appears without -t, then tables matching -T are excluded from what is otherwise a normal dump.
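For example (table names are illustrative), combining the two switches dumps every table in the public schema except a log table:

pg_dump -t 'public.*' -T 'public.logtable' mydb > mydb.dump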
Here is the link for reference:
http://www.postgresql.org/docs/9.1/static/app-pgdump.html
This is a first-time foray into PostgreSQL backups (db dumps) and I've been researching the different pg_dump formats, other pg_dump options, and pg_dumpall. For a Postgres beginner taking an hourly dump (overwriting the previous dump) of two databases that each contain table triggers and two different schemas, what would be the backup format and options to easily achieve the following:
Small file size (single file per db or ability to choose which db to restore)
Easy to restore as a clean db (with or without the same db name(s))
Easy to restore on different server (user maybe different)
Triggers are disabled on restore and re-enabled after restore.
Include example commands to back up and restore.
Any other helpful pg_dump/pg_restore suggestions welcome.
This command will create a small dmp file which includes only the structure of the database - tables, columns, triggers, views etc. (it will take just a few minutes):
pg_dump -s -U "dbuser" -h "host" -p "port" -F c -v -f ob_`date +%Y%m%d`.dmp dbname
**ex:** pg_dump -s -U thames -h localhost -p 5432 -F c -v -f ob_`date +%Y%m%d`.dmp thamesdb
This command will take a backup of the complete database:
pg_dump -h localhost -U "dbuser" "dbname" -Fc > "pathfilename.backup"
**ex:** pg_dump -h localhost -U thames thamesdb -Fc > "thamesdb.backup"
and for restore you can use:
pg_restore -i -h localhost -U "user" -d "dbname" -v "dbname.backup"
**ex:** pg_restore -i -h localhost -U thames -d thamesdb -v "thamesdb.backup"
To take a backup of selected tables (the -t switch accepts patterns, including alternation):
pg_dump -t '(A|B|C)' dbname
For full details you can visit the pg_dump help page; there are many options available.
If you want to take your backups hourly, I would think you should be using log archiving instead of pg_dump.
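A minimal sketch of the relevant postgresql.conf settings (the archive directory is an assumption; the archive_command shown is the example from the PostgreSQL documentation):

wal_level = replica
archive_mode = on
archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'

Combined with an occasional pg_basebackup, this gives continuous, point-in-time recoverable backups without re-dumping both databases every hour.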