I'd like to use pg_dump to back up the contents of a PostgreSQL database. I only want to skip one specific table, which contains several hundred GB of cached data.
How could I achieve this with pg_dump?
According to the docs, there is an --exclude-table option that excludes tables from the dump by matching a pattern (i.e. it allows wildcards):
-T table
--exclude-table=table
Do not dump any tables matching the table pattern. The pattern is interpreted according to the same rules as for -t. -T can be given more than once to exclude tables matching any of several patterns.
When both -t and -T are given, the behavior is to dump just the tables that match at least one -t switch but no -T switches. If -T appears without -t, then tables matching -T are excluded from what is otherwise a normal dump.
The documentation includes a few examples.
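So, for the case in the question, a minimal sketch (the table name public.cache_data is a placeholder): --exclude-table skips the table entirely, while --exclude-table-data keeps its (empty) definition and skips only its rows, which is usually what you want for a cache table.
# skip the table entirely (schema and data)
pg_dump -U myuser --exclude-table='public.cache_data' -f backup.sql mydb
# or keep the empty table definition and skip only its rows
pg_dump -U myuser --exclude-table-data='public.cache_data' -f backup.sql mydb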
You can also do the same thing in a backup script:
#!/bin/bash
BACKUPNUM=3
BACKUPDIR=/home/utrade/dbbackup
DBBACKUP_FILENAME=Database_dump.sql
TARFILE=Database_dump_$(date +%d%h%y).tgz
##### Variables Set
DBUSER=mutrade
DBPASSWD=utrade123
DBNAME=mutradedb
cd "$BACKUPDIR" || exit 1
export PGPASSWORD=$DBPASSWD
# dump everything except the data of the cached OHLC tables (their schema is still included)
/usr/pgsql-11/bin/pg_dump -f "$DBBACKUP_FILENAME" --exclude-table-data='appmaster.ohlc_*' -U "$DBUSER" "$DBNAME"
tar czf "$TARFILE" "$DBBACKUP_FILENAME"
rm -f "$DBBACKUP_FILENAME"
# remove old/extra backups once more than $BACKUPNUM are present
backups_count=$(ls -1 "$BACKUPDIR" | wc -l)
if [[ $backups_count -gt $BACKUPNUM ]]
then
    find "$BACKUPDIR" -mtime +30 -type f -delete
fi
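To restore such a dump later, a quick sketch (the tarball name below is hypothetical, since the real one embeds the date):
tar xzf Database_dump_01Jan24.tgz
psql -U mutrade -d mutradedb -f Database_dump.sql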
Related
I have a PostgreSQL dump (created with pg_dump, custom compressed format). I would like to pg_restore it onto another server, except for a few large tables. I have tried using the -l option and removing the unneeded tables from the list, as shown below. Is there a more effective solution? I am not sure how efficient the approach below is.
pg_restore -l dumpfile.dmp > list.txt
egrep -v "logtable|summarytable|historytable" list.txt > listex.txt
pg_restore -Fc -v -p 5432 -d prism --use-list=listex.txt dumpfile.dmp 2>> error1.out &
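The --use-list approach should be reasonably efficient: with the custom format, pg_restore can skip over entries that are not in the list rather than restoring them. If the restore itself is slow, the -j option runs several restore jobs in parallel (a sketch reusing the same edited list; the job count of 4 is arbitrary):
pg_restore -Fc -j 4 -d prism --use-list=listex.txt dumpfile.dmp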
I want to get an export of my Heroku application's Postgres database, however I want to exclude one table. Is this possible?
Here is the command I use to export my entire Postgres database:
$ PGUSER=my_username PGPASSWORD=my_password heroku pg:pull DATABASE_URL my-application-name
Maybe there is a way to exclude one table, or specify a list of tables to include?
With a normal pg_dump command you can specify the tables to include with the -t option and exclude tables with the -T option.
Can you try this:
$ PGPASSWORD=mypassword pg_dump -Fc --no-acl --no-owner -T <table-to-exclude> -h localhost -U myuser mydb > mydb.dump
Here is the relevant section, copied from the official PostgreSQL documentation:
-T table
--exclude-table=table
Do not dump any tables matching the table pattern. The pattern is interpreted according to the same rules as for -t. -T can be given more than once to exclude tables matching any of several patterns.
When both -t and -T are given, the behavior is to dump just the tables that match at least one -t switch but no -T switches. If -T appears without -t, then tables matching -T are excluded from what is otherwise a normal dump.
Here is the link for your reference:
http://www.postgresql.org/docs/9.1/static/app-pgdump.html
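As far as I know, heroku pg:pull itself has no exclude option, so another workaround is to point pg_dump directly at the Heroku database URL (a sketch; the app name and table name below are placeholders):
$ pg_dump -Fc --no-acl --no-owner -T my_large_table "$(heroku config:get DATABASE_URL -a my-application-name)" > mydb.dump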
I'm trying to create a PostgreSQL backup script using this answer as the basis of my script. The script is:
#! /bin/bash
# backup-postgresql.sh
# by Craig Sanders
# this script is public domain. feel free to use or modify as you like.
DUMPALL="/usr/bin/pg_dumpall"
PGDUMP="/usr/bin/pg_dump"
PSQL="/usr/bin/psql"
# directory to save backups in, must be rwx by postgres user
BASE_DIR="/var/backups/postgres"
YMD=$(date "+%Y-%m-%d")
DIR="$BASE_DIR/$YMD"
mkdir -p $DIR
cd $DIR
# get list of databases in the system, excluding the template dbs
DBS=$($PSQL -l -t | egrep -v 'template[01]' | awk '{print $1}')
# first dump entire postgres database, including pg_shadow etc.
$DUMPALL -D | gzip -9 > "$DIR/db.out.gz"
# next dump globals (roles and tablespaces) only
$DUMPALL -g | gzip -9 > "$DIR/globals.gz"
# now loop through each individual database and backup the schema and data separately
for database in $DBS; do
SCHEMA=$DIR/$database.schema.gz
DATA=$DIR/$database.data.gz
# export the schema of each database as plain text
$PGDUMP -C -c -s $database | gzip -9 > $SCHEMA
# dump data
$PGDUMP -a $database | gzip -9 > $DATA
done
The line:
$DUMPALL -D | gzip -9 > "$DIR/db.out.gz"
is returning this error:
psql: FATAL: role "root" does not exist
/usr/lib/postgresql/9.3/bin/pg_dumpall: invalid option -- 'D'
When I look at the PostgreSQL docs, there doesn't seem to be a -D option anymore. What should the updated command look like?
This is the modified script I ended up using to periodically back up my PostgreSQL database. The -D short option (which, in older versions, made the dump use INSERT commands with column names) is gone, so the script below just writes plain dumps with -f; --column-inserts is the long option to use if INSERT-style output is ever needed:
#! /bin/bash
# backup-postgresql.sh
# by Craig Sanders
# this script is public domain. feel free to use or modify as you like.
DUMPALL="/usr/bin/pg_dumpall"
PGDUMP="/usr/bin/pg_dump"
PSQL="/usr/bin/psql"
# directory to save backups in, must be rwx by postgres user
BASE_DIR="/var/backups/postgres"
YMD=$(date "+%Y-%m-%d")
DIR="$BASE_DIR/$YMD"
mkdir -p "$DIR"
cd "$DIR" || exit 1
# get list of databases in the system, excluding the template dbs
DBS=$($PSQL -l -t | egrep -v 'template[01]' | awk '{print $1}' | egrep -v '^\|' | egrep -v '^$')
# first dump entire postgres database, including pg_shadow etc.
$DUMPALL -c -f "$DIR/db.out"
# next dump globals (roles and tablespaces) only
$DUMPALL -g -f "$DIR/globals"
# now loop through each individual database and backup the schema and data separately
for database in $DBS; do
SCHEMA="$DIR/$database.schema"
DATA="$DIR/$database.data"
# export the schema of each database as plain text
$PGDUMP -C -c -s "$database" -f "$SCHEMA"
# dump data only
$PGDUMP -a "$database" -f "$DATA"
done
# delete backup files older than 30 days
OLD=$(find $BASE_DIR -type d -mtime +30)
if [ -n "$OLD" ] ; then
echo deleting old backup files: $OLD
echo $OLD | xargs rm -rfv
fi
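To run this periodically, one option is a cron entry (a sketch, assuming the script is saved as /usr/local/bin/backup-postgresql.sh and scheduled from the postgres user's crontab):
# run nightly at 02:00 (crontab -e as the postgres user)
0 2 * * * /usr/local/bin/backup-postgresql.sh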
I have many .sql files in a folder (/home/myHSH/scripts) on Linux Debian. I want to know the command to execute all the SQL files in that folder against a PostgreSQL v9.1 database.
PostgreSQL information:
Database name=coolDB
User name=coolUser
Nice to have: it would also be good to know how to execute multiple SQL files from a GUI tool such as pgAdmin3.
From your command line, assuming you're using either Bash or ZSH (in short, anything but csh/tcsh):
for f in *.sql; do
    psql coolDB coolUser -f "$f"
done
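The glob expands in alphabetical order, which is convenient when the files are numbered. If you also want the run to stop at the first failing script, psql's ON_ERROR_STOP variable helps (an optional tweak):
for f in *.sql; do
    psql -v ON_ERROR_STOP=1 coolDB coolUser -f "$f" || break
done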
The find command combined with -exec or xargs can make this really easy.
If you want to execute psql once per file, you can use find's -exec action like this:
find . -iname "*.sql" -exec psql -U username -d databasename -q -f {} \;
-exec will execute the command once per result.
The psql command also lets you specify multiple files, each passed with its own -f argument, e.g. you could build a command such as
psql -U username -d databasename -q -f file1 -f file2
This can be accomplished by piping the output of find through xargs once, to prefix each file name with -f, and then again to execute the assembled psql command itself.
find . -iname "*.sql" | xargs printf -- ' -f %s' | xargs -t psql -U username -d databasename -q
I have a database with hundreds of tables. What I need to do is export specified tables, plus INSERT statements for their data, to one SQL file.
The only statement I know of that can achieve this is
pg_dump -D -a -t zones_seq interway > /tmp/zones_seq.sql
Should I run this statement for each and every table, or is there a way to run a similar statement to export all selected tables into one big SQL file? The pg_dump above does not export the table schema, only inserts; I need both.
Any help will be appreciated.
Right from the manual: "Multiple tables can be selected by writing multiple -t switches"
So you need to list all of your tables
pg_dump --column-inserts -a -t zones_seq -t interway -t table_3 ... > /tmp/zones_seq.sql
(--column-inserts replaces the old -D flag. Since you need the schema as well as the data, drop -a, which limits the dump to data only.)
Note that if you have several tables with the same prefix (or suffix), you can also use wildcards to select them with the -t parameter:
"Also, the table parameter is interpreted as a pattern according to the same rules used by psql's \d commands"
If those specific tables match a particular pattern, you can use that with the -t option in pg_dump.
pg_dump --column-inserts -a -t zones_seq -t interway -t "<pattern>" -f /tmp/zones_seq.sql <DBNAME>
For example, to dump tables whose names start with "test", you can use (psql-style patterns are anchored automatically, so "test*" matches all names beginning with test):
pg_dump --column-inserts -a -t zones_seq -t interway -t "test*" -f /tmp/zones_seq.sql <DBNAME>