Is there any way to exclude some partitioned tables while dumping data using pg_dump.exe on the Windows command line?
I have tried the following pg_dump flag to exclude them, but it's not working:
-T schema_name.table.*
I got the solution. As all my partitions start with "log_history", I added the flag -T schema_name.log_history* to exclude them while dumping data.
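For reference, a minimal sketch of a full command (the user name, database name and output file are placeholders, not from the question); the pattern is quoted so it reaches pg_dump unchanged:
pg_dump -U postgres -d mydb -T "schema_name.log_history*" -F c -f mydb_no_partitions.dump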
Related
I am trying to keep backups of almost everything from a DB2 version 7 database, including tables, views, triggers and indexes. I am currently experimenting with the db2look and db2move commands as shown below, but I am only able to keep the DDL and the tables.
Command to keep the DDL
db2look -d CC_DEV -e -a -o CC_DEV_DDL.sql
Command to keep the Tables with their data
db2move CC_DEV export
Is this enough to keep what I need in case of a disaster? Do I need anything else?
I have a PostgreSQL DB that is about 6 TB. I want to transfer this database to another server using, for example, pg_dumpall. The problem is that I only have a 1 TB HD. How can I copy this database to the new server, which has enough space? Let's suppose I cannot get another HD. Is it possible to produce partial backup files, upload them to the new server, erase the HD and take another batch of backup files until the transfer is complete?
This works here (proof of concept):
- the shell command is issued from the receiving side
- the remote side dumps through the network connection
- the local psql just accepts the commands from this connection
- the data is never stored in a physical file
(for brevity, I only sent the table definitions, not the actual data: --schema-only)
You could have some problems with users and tablespaces (these are global for an installation in Postgres); pg_dumpall will dump and restore these too, IIRC (see the sketch after the script below).
#!/bin/bash
remote=10.224.60.103
dbname=myremotedbname
pg_dump -h ${remote} --schema-only -c -C ${dbname} | psql
#eof
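For the users and tablespaces mentioned above, a hedged follow-up sketch along the same lines (same assumed remote host; pg_dumpall's --globals-only flag dumps only cluster-wide objects such as roles and tablespaces):
#!/bin/bash
remote=10.224.60.103
# roles and tablespaces are cluster-wide, so dump only those and pipe them into the local server
pg_dumpall -h ${remote} --globals-only | psql
#eof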
As suggested above, if you have a fast network connection between source and destination, you can do it without any extra disk.
However, for a 6 TB DB (which includes indexes, I assume), using the custom archive dump format (-Fc) could yield a database dump of less than 1 TB.
Regarding the "by parts" question: yes, it is possible using the table pattern (-t, --table):
pg_dump -t TABLE_NAME ...
You can also exclude tables using -T, --exclude-table:
pg_dump -T TABLE_NAME ...
The above options (-t, -T) can be specified multiple times and can even be combined.
They also support patterns for specifying the tables:
pg_dump -t 'employee_*' ...
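For example, a hedged sketch of a dump done by parts (big_table and mydb are placeholders): the largest table goes into its own archive, everything else into another, and each part can be transferred and deleted before the next one is taken.
# part 1: everything except the largest table, in compressed custom format
pg_dump -Fc -T big_table -f part1.dump mydb
# part 2: only the largest table
pg_dump -Fc -t big_table -f part2.dump mydb
# on the destination server, restore each part in turn
pg_restore -d mydb part1.dump
pg_restore -d mydb part2.dump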
I read these docs:
Description
pg_restore is a utility for restoring a PostgreSQL database from an archive created by pg_dump in one of the non-plain-text formats. It will issue the commands necessary to reconstruct the database to the state it was in at the time it was saved. The archive files also allow pg_restore to be selective about what is restored, or even to reorder the items prior to being restored. The archive files are designed to be portable across architectures.
pg_restore can operate in two modes. If a database name is specified, pg_restore connects to that database and restores archive contents directly into the database. Otherwise, a script containing the SQL commands necessary to rebuild the database is created and written to a file or standard output. This script output is equivalent to the plain text output format of pg_dump. Some of the options controlling the output are therefore analogous to pg_dump options.
Obviously, pg_restore cannot restore information that is not present in the archive file. For instance, if the archive was made using the "dump data as INSERT commands" option, pg_restore will not be able to load the data using COPY statements.
But it's still unclear to me whether pg_restore just loads the data or whether it also creates the structure of the database.
It depends on the options you pass and, obviously, on the information stored in the dump. If you keep reading through the documentation, you will see this option:
--data-only
Restore only the data, not the schema (data definitions). Table data, large objects, and sequence values are restored, if present in the archive.
This option is similar to, but for historical reasons not identical to, specifying --section=data.
That option obviously allows you to restore only the data but not the schema; without it, pg_restore also recreates the database structure, provided it is present in the archive.
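To make that concrete, a minimal sketch (the archive name db.dump and database name mydb are assumptions): by default pg_restore replays whatever the archive contains, structure and data alike, and the options narrow that down.
# full restore: recreates the objects and loads the data stored in the archive
pg_restore -d mydb db.dump
# data only, into tables that already exist
pg_restore --data-only -d mydb db.dump
# structure only, no rows
pg_restore --schema-only -d mydb db.dump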
I have a database containing a very large table including binary data which I want to update on a remote machine, once a day. Rather than dumping the entire table, transferring and recreating it on the remote machine, I want to dump only the updates, then use that dump to update the remote machine.
I already understand that I can produce the dump file as follows:
mysqldump -u user --password=pass --quick --result-file=dump_file \
--where "Updated >= one_day_ago" \
database_name table_name
1) Does the resulting "restore" on the remote machine
mysql -u user --password=pass database_name < dump_file
only update the necessary rows? Or are there other adverse effects?
2) Is this the best way to do this? (I am unable to pipe to the server directly using the --host option.)
If you only dump rows where the WHERE clause is true, you will get a .sql file that contains only the values you want to update, so you will never get duplicate values with the current export options. However, inserting these into the other database will not work as-is. You will have to use the command-line parameter --replace: otherwise, if your dump contains a row with id 6 in table table1 and you try to import it into your other database, you will get a duplicate-key error if a row with id = 6 already exists. Using the --replace parameter, it will overwrite the older values, which can only happen if there is a newer one (according to your WHERE clause). See the sketch after this answer.
So to quickly answer:
Yes, this will restore on the remote machine, but only if you dumped using --replace (it will then apply the latest version of each row you have)
I am not entirely sure if you can pipe backups. According to this website, you can, but I have never tried it before.
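A hedged sketch of the dump step with --replace added (the credentials and the one_day_ago expression are placeholders, as in the question); the resulting file contains REPLACE statements, so importing it with the same mysql command overwrites existing rows instead of failing on duplicates:
mysqldump -u user --password=pass --quick --replace \
--where "Updated >= one_day_ago" \
--result-file=dump_file \
database_name table_name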
I need, as a one-off, to copy data from one table in a PostgreSQL database to the corresponding table in a different database. There's not that much data: about 2500 rows, 8 columns (some numeric, some varchar).
My first thought was to simply pg_dump -a -t table -f output.file and then pg_restore on another database. However, as it turned out, the versions of pg_dump and the source server do not match - and I have no control over versions, so upgrading is not an option:
pg_dump: server version: 9.1.2; pg_dump version: 9.0.5
pg_dump: aborting because of server version mismatch
Unfortunately, with version 9 of Postgres, the -i (ignore version) option is no longer available. I do know what I am doing, but it still wouldn't let me (naturally).
What other options do I have?
I would use COPY TO and COPY FROM. It works in both versions and is the optimal tool for this.
If you want to use pg_dump, you have to use the appropriate version. There are separate executables for each version. On Linux you can get the path of the currently used executable with which pg_dump.
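A minimal sketch of the COPY TO / COPY FROM route (host, database and table names are placeholders); the rows are streamed between the two servers through the client, so no intermediate file is needed:
# server-side COPY driven from any machine that can reach both databases
psql -h source_host -d source_db -c "COPY mytable TO STDOUT" \
| psql -h dest_host -d dest_db -c "COPY mytable FROM STDIN"
If a file is more convenient, psql's \copy does the same in two steps: \copy mytable TO 'mytable.csv' CSV on the source and \copy mytable FROM 'mytable.csv' CSV on the destination.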