A faster way to copy a PostgreSQL database (or the best way) - postgresql

I did a pg_dump of a database and am now trying to install the resulting .sql file on to another server.
I'm using the following command.
psql -f databasedump.sql
I initiated the database install earlier today, and now, 7 hours later, the database is still being populated. I don't know if this is how long it is supposed to take, but I continue to monitor it; so far I've seen over 12 million inserts and counting. I suspect there's a faster way to do this.

Create your dumps with
pg_dump -Fc -Z 9 --file=file.dump myDb
-Fc: --format=custom
Output a custom archive suitable for input into pg_restore. This is the most flexible format in that it allows reordering of loading data as well as object definitions. This format is also compressed by default.
-Z 9: --compress=0..9
Specify the compression level to use. Zero means no compression. For the custom archive format, this specifies compression of individual table-data segments, and the default is to compress at a moderate level. For plain text output, setting a nonzero compression level causes the entire output file to be compressed, as though it had been fed through gzip; but the default is not to compress. The tar archive format currently does not support compression at all.
and restore it with
pg_restore -Fc -j 8 -d myDb file.dump
-j: --jobs=number-of-jobs
Run the most time-consuming parts of pg_restore — those which load data, create indexes, or create constraints — using multiple concurrent jobs. This option can dramatically reduce the time to restore a large database to a server running on a multiprocessor machine.
Each job is one process or one thread, depending on the operating system, and uses a separate connection to the server.
The optimal value for this option depends on the hardware setup of the server, of the client, and of the network. Factors include the number of CPU cores and the disk setup. A good place to start is the number of CPU cores on the server, but values larger than that can also lead to faster restore times in many cases. Of course, values that are too high will lead to decreased performance because of thrashing.
Only the custom and directory archive formats are supported with this option. The input must be a regular file or directory (not, for example, a pipe). This option is ignored when emitting a script rather than connecting directly to a database server. Also, multiple jobs cannot be used together with the option --single-transaction.
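Putting the two together, a complete round trip might look like this (a minimal sketch; the target database name myDb_restored and the job count are placeholders to adapt to your setup):
pg_dump -Fc -Z 9 --file=file.dump myDb
createdb myDb_restored
pg_restore -j 8 -d myDb_restored file.dump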
Links:
pg_dump
pg_restore

Improve pg_dump & pg_restore
pg_dump | always use the directory format (-Fd) together with the -j option
time pg_dump -j 8 -Fd -f /tmp/newout.dir fsdcm_external
pg_restore | always restore the directory format with the -j option, and tune postgresql.conf for the duration of the restore:
work_mem = 32MB
shared_buffers = 4GB
maintenance_work_mem = 2GB
full_page_writes = off
autovacuum = off
wal_buffers = -1
time pg_restore -j 8 --format=d -C -d postgres /tmp/newout.dir
For more info
https://gitlab.com/yanar/Tuning/wikis/improve-pg-dump&restore
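A hedged sketch of applying these settings only for the duration of the restore, assuming PGDATA points at the data directory and the server can be restarted (the values mirror the list above and are starting points, not recommendations):
pg_ctl restart -o "-c shared_buffers=4GB -c work_mem=32MB -c maintenance_work_mem=2GB -c full_page_writes=off -c autovacuum=off -c wal_buffers=-1"
time pg_restore -j 8 --format=d -C -d postgres /tmp/newout.dir
pg_ctl restart    # afterwards, restart again with the normal postgresql.conf values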

Why are you producing a raw .sql dump? The opening description of pg_dump recommends the "custom" format -Fc.
Then you can use pg_restore which will restore your data (or selected parts of it). There is a "number of jobs" option -j which can use multiple cores (assuming your disks aren't already the limiting factor). In most cases, on a modern machine you can expect at least some gains from this.
Now you say "I don't know how long this is supposed to take". Well, until you've done a few restores you won't know. Do monitor what your system is doing and whether you are limited by CPU or disk I/O.
Finally, the configuration settings you want for restoring a database are not the ones you want for running it in production. A couple of useful starters:
Increase maintenance_work_mem so you can build indexes in larger chunks
Turn off fsync during the restore. If your machine crashes, you'll start from scratch again anyway.
Do remember to reset them after the restore though.
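A minimal sketch of doing that with ALTER SYSTEM, assuming superuser access; mydb, file.dump, and the 2GB value are placeholders (fsync can be changed with a configuration reload, no restart required):
psql -c "ALTER SYSTEM SET maintenance_work_mem = '2GB'"
psql -c "ALTER SYSTEM SET fsync = off"
psql -c "SELECT pg_reload_conf()"
pg_restore -j 8 -d mydb file.dump
psql -c "ALTER SYSTEM RESET maintenance_work_mem"
psql -c "ALTER SYSTEM RESET fsync"
psql -c "SELECT pg_reload_conf()"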

pg_dump is generally recommended to be paired with pg_restore rather than psql. The restore can then be split across cores to speed up loading by passing the --jobs flag, like so:
$ pg_dump -Fc db > db.Fc.dump
$ pg_restore -d db --jobs=8 db.Fc.dump
Postgres themselves have a guide on bulk loading of data.
I would also recommend tuning your postgresql.conf configuration file and setting appropriately high values for maintenance_work_mem and checkpoint_segments (replaced by max_wal_size in PostgreSQL 9.5 and later); higher values here may dramatically increase your write performance.
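As an illustration only, the relevant postgresql.conf lines might look like this; both values are assumptions to adapt to your available RAM, not recommendations:
maintenance_work_mem = 2GB
checkpoint_segments = 64        # pre-9.5 only; on 9.5+ raise max_wal_size instead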

Related

Why is pg_restore that slow and PostgreSQL almost not even using the CPU?

I just had to use pg_restore with a small dump of 30 MB, and it took 5 minutes on average! On my colleagues' computers it is ultra fast, a dozen seconds or so. The difference between the two is the CPU usage: while for the others the database uses a fair amount of CPU (60-70%) during the restore operation, on my machine it stays at only a few percent (0-3%), as if it were not active at all.
The exact command was: pg_restore -h 127.0.0.1 --username XXX --dbname test --no-comments test_dump.sql
The originating command to produce this dump was: pg_dump --dbname=XXX --user=XXX --no-owner --no-privileges --verbose --format=custom --file=/sql/test_dump.sql
(A screenshot taken in the middle of the restore operation and the corresponding vmstat 1 output are not reproduced here.)
I've searched the web for a solution for a few hours, but this under-usage of the CPU remains quite mysterious. Any ideas will be appreciated.
For the stack, I am on Ubuntu 20.04 and PostgreSQL 13.6 is running in a Docker container. I have decent hardware, neither bad nor great.
EDIT: This very same command worked in the past on my machine with the same ordinary HDD, but now it is terribly slow. The only difference I saw compared to the others (for whom it is blazing fast) was the CPU usage (even though they have SSDs, which shouldn't be the limiting factor at all, especially with a 30 MB dump).
EDIT 2: For those who suggested the problem was I/O-boundness and maybe a slow disk, I just tried, without much conviction, to run my command on an SSD partition I had just created, and nothing changed.
The vmstat output shows that you are I/O bound. Get faster storage, and performance will improve.
PostgreSQL is tuned for data durability by default. Transactions are usually flushed to disk at each and every commit, forcing write-through of any disk write cache, so a restore can easily be I/O-bound.
When restoring a database from a dump file, it may make sense to lower these durability settings, especially if the restore is done while your application is offline, and especially in non-production environments.
I temporarily run postgres with these options: -c fsync=off -c synchronous_commit=off -c full_page_writes=off -c checkpoint_flush_after=256 -c autovacuum=off -c max_wal_senders=0
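Since the server in question runs in Docker, one hedged way to pass these flags is to append them to the official postgres image's arguments (container name, password, and image tag are illustrative):
docker run --name pg-restore-test -e POSTGRES_PASSWORD=secret -p 127.0.0.1:5432:5432 -d postgres:13.6 \
  -c fsync=off -c synchronous_commit=off -c full_page_writes=off \
  -c checkpoint_flush_after=256 -c autovacuum=off -c max_wal_senders=0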
Refer to these documentation sections for more information:
14.4.9. Some Notes about pg_dump
14.5. Non-Durable Settings.
Also this article:
Settings for a fast pg_restore

The database backup in postgresql is five times larger than the database itself

My English is bad and I used a translator. I am making a backup using:
pg_dump -p 5432 db > backup.sql
At the moment the backup has reached 4 GB and keeps growing, while the database itself is only 700 MB. I don't think it should be like this; how can I optimize the creation of the backup and reduce its size?
It is very unusual for a dump to be larger than the database; usually that means that you have bytea columns or large objects in there. But even with these I could not explain a factor of five.
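A quick way to check whether large objects are the culprit is to look at the size of the pg_largeobject system catalog (a hedged example, run as a superuser; db is the database name from the question):
psql -d db -c "SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'))"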
Without understanding the cause, the solution is probably to use a compressed custom format dump:
pg_dump -F c -p 5432 -f backup.sql db

Backing a very large PostgreSQL DB "by parts"

I have a PostgreSQL DB that is about 6 TB. I want to transfer this database to another server using, for example, pg_dumpall. The problem is that I only have a 1 TB HD. How can I copy this database to the new server, which has enough space? Let's suppose I cannot get another HD. Is it possible to create partial backup files, upload them to the new server, erase them from the HD, and produce another batch of backup files until the transfer is complete?
This works here (proof of concept):
the shell command is issued from the receiving side
the remote side dumps through the network connection
the local psql just accepts the commands from this connection
the data is never stored in a physical file
(for brevity, I only sent the table definitions, not the actual data: --schema-only)
You could have some problems with users and tablespaces (these are global to an installation in Postgres); pg_dumpall will dump and restore these too, IIRC.
#!/bin/bash
remote=10.224.60.103
dbname=myremotedbname
pg_dump -h ${remote} --schema-only -c -C ${dbname} | psql
#eof
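To move the actual data as well, the same pattern works without --schema-only, and the globals (roles and tablespaces) can be streamed the same way with pg_dumpall; a hedged variant of the dump line above:
pg_dumpall -h ${remote} --globals-only | psql
pg_dump -h ${remote} -c -C ${dbname} | psql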
As suggested above, if you have a fast network connection between source and destination, you can do it without any extra disk.
However, for a 6 TB DB (which I assume includes indexes), using the archive dump format (-Fc) could yield a database dump of less than 1 TB.
Regarding the "by parts" question: yes, it is possible using the table pattern (-t, --table):
pg_dump -t TABLE_NAME ...
You can also exclude tables using -T, --exclude-table:
pg_dump -T TABLE_NAME ...
The above options (-t, -T) can be specified multiple times and can even be combined.
They also support patterns for specifying the tables:
pg_dump -t 'employee_*' ...
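A hedged sketch of the "by parts" idea under the 1 TB limit: dump one table pattern at a time in the compressed custom format, ship it to the new server, restore it there, and delete the local file before the next batch (host, database, and table names are placeholders):
#!/bin/bash
tables="employee_2019 employee_2020 employee_2021"
for t in $tables; do
    pg_dump -Fc -t "$t" -f "/tmp/${t}.dump" mydb
    scp "/tmp/${t}.dump" newserver:/restore/
    ssh newserver "pg_restore -d mydb /restore/${t}.dump && rm /restore/${t}.dump"
    rm "/tmp/${t}.dump"
done
#eof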

Faster method for copying postgresql databases

I have 12 test databases on a test server. I generate the dump from the main server and copy it to the test server every day. I populate the test databases using:
zcat ${dump_address} | psql $db_name
It takes 45 minutes for each database. Would it be faster to do that for just one database and then use:
CREATE DATABASE newdb WITH TEMPLATE olddb;
for the rest? Are there any other methods I could try?
It depends on how much data there is and how many indexes. Some ideas to speed things up include:
Make sure the dump does not use INSERT commands - they tend to be much slower than COPY.
If you have many indexes or constraints, use the custom or directory dump format and then call pg_restore -j $my_cpu_count. The -j option controls how many jobs create them concurrently, and index creation is often CPU-bound.
Use faster disks.
Copy $PGDATA in its entirety (rsync or some fancy snapshots).
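A hedged sketch combining these ideas with the template approach from the question: restore one database in parallel from a custom-format dump, then clone the rest from it (paths and database names are placeholders; the template database must have no other active connections while it is being cloned):
#!/bin/bash
dump_file=/backups/main.dump    # a custom-format dump: pg_dump -Fc maindb -f main.dump
dropdb --if-exists test01 && createdb test01
pg_restore -j "$(nproc)" -d test01 "$dump_file"
for db_name in test02 test03 test04; do
    dropdb --if-exists "$db_name"
    createdb -T test01 "$db_name"
done
#eof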

PostgreSQL backup with smallest output files

We have a PostgreSQL database that is over 732 GB when backed up as a file system backup. When we do a pg_dump we can get it down to 585 GB. If I combine the pg_dump with the PITR method, will this give me the best backup with the smallest backup data file size? My plan was to run pg_start_backup, then the pg_dump, then pg_stop_backup. I know the documentation says to run a file system backup, but I want a smaller backup data set. I would then copy off the WAL files and back them up at night.
To truly get the smallest file, you'll have to try compressing your pg_dump -Fc dump file with one of many compression tools and settings. Using gzip or xz with maximum possible compression would be a start. This will of course require an excellent CPU and lots of CPU time.
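A hedged example of that approach: write an uncompressed custom-format dump and pipe it through xz at its highest level, then feed the decompressed stream back to pg_restore later (database name and compression level are placeholders):
pg_dump -Fc -Z 0 mydb | xz -9e > mydb.dump.xz
# to restore (without -j, since pg_restore reads from a pipe here):
xz -dc mydb.dump.xz | pg_restore -d mydb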