I created two tablespaces, one for tables and one for indexes. I use PostgreSQL 14.5 on Windows 10.
With pg_basebackup of the cluster, I get five files:
base.tar
pg_wal.tar
21511.tar
21512.tar
backup_manifest
However, when I restore base.tar with the command tar xvf ...., only the cluster files are restored, and I don't know where to restore the tablespace tars.
What is the correct workflow to perform the restore on Windows 10?
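For reference, this is roughly what I do today (the target data directory C:\pgdata is only an example):

tar xvf base.tar -C C:\pgdata
tar xvf pg_wal.tar -C C:\pgdata\pg_wal

That leaves 21511.tar and 21512.tar, which I assume are the two tablespaces, with no obvious destination.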
Thank you in advance for your help
Creating a backup in directory format with
pg_dump -f "sba" -Fdirectory --jobs=32 --verbose sba
throws this error:
pg_dump: error: could not stat file "sba/282168.data.gz": value too large
How can I create a backup in a format from which tables can be selectively restored?
The file is 6.2 GB on NTFS. Was the backup created successfully, or is this just an informational message?
The server is Postgres 12 running on Debian Linux 10 under WSL.
The client is pg_dump from Postgres 15 running on Windows 11.
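Would something like this be enough to confirm the archive is usable, assuming pg_restore 15 on the Windows side can still read the directory?

pg_restore -l sba

And for selectively restoring a single table later (the table and target database names are placeholders):

pg_restore -d sba -t some_table sba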
In PostgreSQL, how do I create a WAL archive?
Is a step-by-step explanation available for backup and restore using Linux shell commands?
Environment:
OS: Linux RHEL 7.4
DB: PostgreSQL 9.2
Is there any option for a complete backup, and restore, without stopping PostgreSQL?
Yes, it is described at https://www.postgresql.org/docs/9.2/static/continuous-archiving.html, with examples for Unix-like systems.
You don't have to stop Postgres to take a backup, neither the base backup nor the WALs.
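A minimal sketch for 9.2, assuming an archive directory /mnt/server/archivedir that the postgres user can write to (the archive_command and restore_command lines are the examples from the documentation page above; adjust paths to your environment):

# postgresql.conf
wal_level = archive
archive_mode = on
archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'

# take a base backup without stopping the server
pg_basebackup -D /var/backups/base -Ft -z -P

# recovery.conf on the restored copy, to replay archived WAL
restore_command = 'cp /mnt/server/archivedir/%f %p'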
We have a simple database with just 5 tables. But 1 table is huge, around 100GB of data by itself, and the indices together are nearly double that size. The server is an old CentOS 5 server with PG 9.0. I'm moving to a more modern setup with SSD hard disks, CentOS 7, and PG 9.6.
Question: what's the best way to migrate the data in a simple way? pg_dump it on the old server, move it via rsync or something to the new server, and pg_restore it? I could do the pg_dump with the -Fc option, so that we can pg_restore it easily (otherwise it's a text format and we have to use psql -f instead). But a trial run suggested that while the pg_dump is OK, the pg_restore on the destination server, which is much faster, goes on and on. We ran pg_restore --verbose, but there was no verbosity at all. Perhaps the server was stuck doing IO?
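Concretely, what we do is roughly this (database name and paths are placeholders):

pg_dump -Fc -f ourdb.dump ourdb
rsync -av ourdb.dump newserver:/var/tmp/
pg_restore --verbose --create -d postgres /var/tmp/ourdb.dump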
Our postgresql.conf settings for the pg_restore are as follows:
maintenance_work_mem = 1500MB
fsync = off
synchronous_commit = off
wal_level = minimal
full_page_writes = off
wal_buffers = 64MB
max_wal_senders = 0
wal_keep_segments = 0
archive_mode = off
autovacuum = off
What should we do to ensure that the pg_restore works? Right now both servers are offline, so I can do pretty much anything needed -- any settings can be changed.
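Is there a way to see what the restore is actually doing while it appears hung? I assume something along these lines, run on the 9.6 server, would at least show the active statements (the database name is a placeholder):

psql -d ourdb -c "SELECT pid, state, wait_event_type, query FROM pg_stat_activity"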
Some more background info--
Old server: CentOS 5, SCSI RAID 1 disks, 4GB RAM (not much), PG 9.0
New server: CentOS 7 (latest), SSD disk, 16GB RAM, PG 9.6
Thank you for any pointers on moving large tables in the best way possible. The usual PG documentation doesn't seem to be helping. We've tried both the text dump way and the -Fc way.
I strongly suggest you use pg_upgrade:
Install 9.0.23 on the new server. From source if necessary.
Set up a streaming replica on the new server using pg_basebackup and a suitable recovery.conf. Enable WAL archiving and restore_command too, in case it becomes desynchronised for any reason.
Also install 9.6 on the new server
Do an upgrade test by stopping the replica and attempting a pg_upgrade to 9.6. Restart the replica, fix any issues and repeat until you succeed.
When you're confident pg_upgrade will succeed, plan a cut-over time. Stop the 9.0 master and stop the replica. pg_upgrade the replica. Start the new 9.6 server.
See the pg_upgrade documentation for more info.
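The upgrade step itself looks roughly like this; the paths assume PGDG-style packages, so adjust them to your layout, and run it as the postgres user with both clusters stopped:

/usr/pgsql-9.6/bin/pg_upgrade \
  -b /usr/pgsql-9.0/bin -B /usr/pgsql-9.6/bin \
  -d /var/lib/pgsql/9.0/data -D /var/lib/pgsql/9.6/data \
  --link

--link uses hard links instead of copying the data files, which is what makes pg_upgrade fast; leave it off if you want the old cluster left fully intact.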
Remember: KEEP BACKUPS.
If you want simple, just pg_dumpall and pipe it to psql. But that will be slow, and it will cause problems if your restore fails partway through and you then try to resume, etc.
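That simple path is just a pipe from the old server into the new one, roughly (the host names are placeholders):

pg_dumpall -h old-server -U postgres | psql -h new-server -U postgres -d postgres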
Better:
If you don't want to use replication, then use parallel-mode pg_dump and pg_restore with directory format input/output if you want to get things done quickly.
Configure your 9.0 database to accept connections from the 9.6 host and make sure there's a high-performance network connection (gigabit or better).
Using the 9.6 host, running the 9.6 versions of pg_dump and pg_dumpall:
Dump your global objects with pg_dumpall --globals-only -f globals.sql
Dump your database(s) with pg_dump -Fd -j4 -d dbname -f dbname.dumpdir or similar. -j is the number of parallel jobs. You'll need to dump each database separately if there are multiple ones.
Cleanly initdb a new PostgreSQL 9.6 install, removing whatever attempts you have previously made (since I don't know what is/isn't there). Alternately, DROP any created roles, databases, etc, returning it to a clean state.
Use psql to run the globals script: psql -v ON_ERROR_STOP=1 --single-transaction -f globals.sql -d postgres
Use pg_restore to load the database dumps: pg_restore --create -d template1 -j4 dbname.dumpdir, repeating for each dumped DB. You can restore multiple DBs concurrently.
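If there are several databases, a small loop can drive both steps; a rough sketch, with the old host name and the dumps directory as placeholders:

# dump every non-template database except postgres from the old server
for db in $(psql -h old-server -d postgres -At -c "SELECT datname FROM pg_database WHERE NOT datistemplate AND datname <> 'postgres'"); do
  pg_dump -h old-server -Fd -j4 -d "$db" -f "dumps/$db.dumpdir"
done

# restore each dump into the new 9.6 cluster
for dir in dumps/*.dumpdir; do
  pg_restore --create -d template1 -j4 "$dir"
done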
Yes, I know the handling of global objects sucks. And yes, it'd be nice if all this were wrapped up in a simple command. But it isn't. Designs and well thought out patches are welcome if you want to try to improve this. So far nobody's wanted to enough to do the work.
I'm trying to create a hot standby server using PostgreSQL 9.3.5 and Red Hat 6.5
I receive the following error when running pg_basebackup:
$ pg_basebackup -h 172.28.250.10 -D /var/lib/pgsql/9.3/data -U replicador -v -P
pg_basebackup: could not create directory "/var/lib/pgsql/9.3/data/osm_indices":
File exists
/var/lib/pgsql/9.3/data exists and is empty when I launch the tool; when it fails, there is data at /var/lib/pgsql/9.3/data/osm_indices. The DB has 5 tablespaces, and 4 are copied completely.
Both servers are running the same O.S. and DB server version.
I've tried the same with 2 different masters and 3 slaves with the same result, but it is not always the same tablespace that fails to copy.
Thanks,
Luis.
It looks like you might have tablespaces inside the data directory.
You should not do that. Tablespaces are meant to be separate paths, and some of the tools assume that they will be.
Move the tablespaces outside the datadir and pg_basebackup should behave, so long as you have corresponding paths on the destination server.
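You can see where each tablespace currently lives, and relocate one, with something along these lines; the new tablespace name and path are only examples, and the target directory must already exist, be empty, and be owned by postgres:

# list tablespaces and their locations (9.2+)
psql -c "SELECT spcname, pg_tablespace_location(oid) FROM pg_tablespace"

# create a replacement tablespace outside the data directory and move objects into it
psql -d yourdb -c "CREATE TABLESPACE osm_indices_new LOCATION '/srv/pgsql/tablespaces/osm_indices'"
psql -d yourdb -c "ALTER INDEX some_index SET TABLESPACE osm_indices_new"

# once nothing is left in the old tablespace
psql -d yourdb -c "DROP TABLESPACE osm_indices"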
I work with PostgreSQL and pgAdmin, and an accident forced me to format my Mac. I had two options: recover the whole system via Time Machine or install everything from scratch.
I chose to install from scratch, but I need to recover the old PostgreSQL database. I'm now installing everything again (pgAdmin, etc.). I don't have any backup of my old database, but I do have the old PostgreSQL data files (thanks to Time Machine).
How can I recover the old database into the new installation?
I tried this, from http://www.postgresql.org/docs/8.1/static/backup-file.html:
tar -cf backup.tar /usr/local/pgsql/data
The problem is that the data files are (I think) in /Library/PostgreSQL/9.1/data and that folder seems to be encrypted or hidden(?), and I can't get the tar command to work. This is what happens:
tar -cf backup.tar "/Volumes/backup/Backups.backupdb/MacBook, MacBook Pro de Albert/2014-04-30-112220/Macintosh HD/Library/PostgreSQL/9.1/data"
I get this error:
tar: /Volumes/backup/Backups.backupdb/MacBook, MacBook Pro de Albert/2014-04-30-112220/Macintosh HD/Library/PostgreSQL/9.1/data: Cannot stat: No such file or directory
tar: Error exit delayed from previous errors.
Another option was to copy the Time Machine folder /9.1/ next to the new 9.3 and try this:
How to restore a file system level copy of a PostgreSQL database (not dump) to a different PC
but when executing pg_dump it asked me for a password I didn't have.
The solution for dummies: uninstall PostgreSQL 9.3 and install PostgreSQL 9.1 instead; the installer detects the 9.1 data folder and tells me it is going to use it. Perfect, that is exactly what I needed.
Now I can either keep 9.1 or upgrade to 9.3.
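If I decide to upgrade to 9.3 later, I understand a dump and reload along these lines should work, assuming both versions run at the same time on different ports (the ports here are just examples):

/Library/PostgreSQL/9.1/bin/pg_dumpall -p 5432 -U postgres > all91.sql
/Library/PostgreSQL/9.3/bin/psql -p 5433 -U postgres -d postgres -f all91.sql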