Back up PostgreSQL WAL logs - postgresql

I am trying to configure database backups in PostgreSQL with pg_basebackup and WAL logs.
For now I take a full backup once a week and want to back up the WAL logs too. But, as I understand it, PostgreSQL writes to them all the time. So how can I copy them and be sure that they are not corrupted?
Thanks

You set archive_command to a shell command that copies the WAL file to a safe archive location, so the burden of getting that copy right is mostly on you.
When PostgreSQL runs archive_command, it assumes that the WAL file is not corrupted. Only a PostgreSQL bug or a bug in the storage system could cause a corrupted WAL segment.
There is no better protection against PostgreSQL bugs than always running the latest bugfix release, and you can invest in storage hardware that will at least detect failure.
You can also write your archive_command with a certain amount of paranoia, e.g. by comparing the md5sum of the WAL segment and its archive copy.
Another idea is to write two copies of the WAL file to different storage systems.
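For illustration, a more paranoid archive_command could point at a small wrapper script roughly like the one below. This is only a sketch: the archive path and script name are placeholders, and syncing a single file assumes a reasonably recent coreutils.
#!/bin/bash
# called from postgresql.conf as: archive_command = '/usr/local/bin/paranoid_archive.sh %p %f'
# %p = path of the WAL segment to archive, %f = its file name
set -e
WAL_PATH="$1"
WAL_NAME="$2"
ARCHIVE_DIR=/mnt/wal_archive
# refuse to overwrite a segment that was already archived
test ! -f "$ARCHIVE_DIR/$WAL_NAME"
cp "$WAL_PATH" "$ARCHIVE_DIR/$WAL_NAME.tmp"
sync "$ARCHIVE_DIR/$WAL_NAME.tmp"
# compare checksums of the source and the copy before declaring success
if [ "$(md5sum < "$WAL_PATH" | cut -d' ' -f1)" != "$(md5sum < "$ARCHIVE_DIR/$WAL_NAME.tmp" | cut -d' ' -f1)" ]; then
    echo "checksum mismatch while archiving $WAL_NAME" >&2
    exit 1
fi
# a non-zero exit anywhere above makes PostgreSQL keep the segment and retry later
mv "$ARCHIVE_DIR/$WAL_NAME.tmp" "$ARCHIVE_DIR/$WAL_NAME"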

Related

Is it safe to delete archive_status log file in postgresql 10

I am not a DBA, but I am running PostgreSQL 10 (the BigSQL distribution) on a production server. I set up replication from my production server to another server; on the replication server everything is working, but on my production server there is no space left. After running du on the production server I found that the pg_wal folder holds 17 GB of files, each file being 16 MB in size.
After some Googling I changed my postgresql.conf file as follows:
wal_level = logical
archive_mode = on
archive_command = 'cp -i %p /etc/bigsql/data/pg10/pg_wal/archive_status/%f'
I installed PostgreSQL 10 from BigSQL and made the above changes.
After the changes, the directory pg_wal/archive_status held 16 GB of logs. So my question is: should I delete them manually, or do I have to wait for the system to delete them automatically?
Also, if I set archive_mode to on, should those WAL files get removed automatically?
Thanks for your precious time.
This depends on how you do your backups and whether you'd ever need to restore the database to some point in time.
Only a full offline filesystem backup (offline meaning with the database turned off) or an online logical backup with pg_dumpall will not need those files for a restore.
You'd need those files to restore a filesystem backup created while the database is running. Without them the backup will fail to restore. Though there exist backup solutions that copy needed WAL files automatically (like Barman).
You'd also need those files if your replica database will ever fall behind the master for some reason. Or you'd need to restore the database to some past point-in-time.
But these files compress pretty well - should be less than 10% size after compression - you can write your archive_command to compress them automatically instead of just copying.
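For example, a compressing archive_command and its matching restore_command could look roughly like this (the archive path below is just an illustration):
# postgresql.conf - compress each 16 MB segment on its way into the archive
archive_command = 'gzip < %p > /mnt/wal_archive/%f.gz'
# recovery.conf (postgresql.conf on newer versions) - decompress when a segment is needed for restore
restore_command = 'gunzip < /mnt/wal_archive/%f.gz > %p'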
And you should delete them eventually from the archive. I'd recommend not deleting them until they're at least a month old and at least 2 full successful backups have been taken since they were created.

Which Postgresql WAL files can I safely remove from the WAL archive folder

Current situation
So I have WAL archiving set up to an independent internal hard drive on a data-logging computer running Postgres. The hard drive containing the WAL archives is filling up, and I'd like to remove and archive all the WAL archive files, including the initial base backup, to external backup drives.
The directory structure is like:
D:/WALBACKUP/ which is the parent folder for all the WAL files (00000110000.CA00000004 etc)
D:/WALBACKUP/BASEBACKUP/ which holds the .tar of the initial base backup
The question I have then is:
Can I safely move literally every single WAL file except the current one (000000000001.CA0000.. and so on), including the base backup, to another HDD? (Note that the database is live and receiving data.)
cheers!
WAL archives
You can use the pg_archivecleanup command to remove WAL from an archive (not pg_xlog) that's not required by a given base backup.
In general I suggest using PgBarman or a similar tool to automate your base backups and WAL retention though. It's easier and less error prone.
pg_xlog
Never remove WAL from pg_xlog manually. If you have too much WAL then:
your wal_keep_segments setting is keeping WAL around;
you have archive_mode on and archive_command set but it isn't working correctly (check the logs);
your checkpoint_segments is ridiculously high so you're just generating too much WAL; or
you have a replication slot (see the pg_replication_slots view) that's preventing the removal of WAL.
You should fix the problem that's causing WAL to be retained. If nothing seems to have happened after changing a setting run a manual CHECKPOINT command.
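A quick way to check those suspects from the shell could look like this (the pg_replication_slots view only exists on 9.4 and later, and checkpoint_segments only on older releases, so empty results for some of them are expected):
# which of the WAL-retention settings are in play?
psql -c "SELECT name, setting FROM pg_settings WHERE name IN ('wal_keep_segments', 'archive_mode', 'archive_command', 'checkpoint_segments');"
# any replication slot pinning old WAL?
psql -c "SELECT slot_name, active, restart_lsn FROM pg_replication_slots;"
# then force a checkpoint so obsolete segments can be recycled
psql -c "CHECKPOINT;"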
If you have an offline server and need to remove WAL to start it, you can use pg_archivecleanup if you must. It knows how to remove only WAL that isn't needed by the server itself ... but it might break your archive-based backups, streaming replicas, etc. So don't use it unless you must.
WAL files are incremental, so the simple answer is: You cannot throw any files out. The solution is to make a new base backup and then all previous WALs can be deleted.
The WAL files contain the individual changes made to the database, so if you throw some older WALs out, the recovery process will fail (it will not silently skip missing WAL files) because the state of the database cannot be restored reliably. You can move the WAL files to some other location without upsetting the WAL process, but you'd have to make all WAL files available again from a single location if you ever need to recover your database to some point in the past. If you are running out of disk space, that may mean recovering on some machine that has enough space to hold the base backup and all the WAL files; the main issue is whether you can do that fast enough to restore the full database after an incident.
Another issue is that if you cannot identify where or when the problem occurred that needs to be corrected, your only option is to start with the base backup and then replay all the WAL files. This procedure is not difficult, but if you have an old base backup and many WAL files to process, it simply takes a lot of time.
The best approach for your case, in general, is to make a new base backup every x months and collect WALs with that base backup. After every new base backup you can delete the old base backup and its subsequent WALs or move them to cheap offline storage (DVD, tape, etc). In the case of a major incident you can quickly restore the database to a known correct state from the recent base backup and the relatively few WAL files collected since then.
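A minimal sketch of that rotation, assuming a Unix-style layout with base backups under /backup/base and the WAL archive under /backup/wal_archive (both placeholders), and assuming the archive contains the .backup history files written at backup time:
# take the new base backup (tar format, gzip-compressed)
pg_basebackup -D /backup/base/$(date +%Y%m%d) -Ft -z
# find the newest backup history file in the archive ...
NEWEST=$(ls /backup/wal_archive/*.backup | sort | tail -n 1)
# ... and drop every WAL segment older than the backup it describes
pg_archivecleanup /backup/wal_archive "$(basename "$NEWEST")"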
A solution we went for is executing pg_basebackup every night. This creates a base backup, and later we can use pg_archivecleanup to clean up all the "old" WAL files from before that base backup with something like
"%POSTGRES_INSTALLDIR%\bin\pg_archivecleanup" -d %WAL_backup_dir% %newestBaseFile%
Fortunately, we have never had to recover yet, but it should work in theory.
In case someone found this by searching for how to safely clean up the WAL directory under a replication architecture: consider the scenario where there are leftovers from offline replicas, i.e. unused replication slots waiting for a replica to come back online and therefore keeping a lot of WAL archives on the master DB.
In our case a replica went down due to a hardware failure. We had to recreate it along with its replication slot on the master DB, but forgot to get rid of the previously used slot. Once we dropped it, PostgreSQL got rid of the unused WALs and all was good.
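If you hit the same situation, the slots can be inspected and the stale one dropped from psql; the slot name below is just an example:
# an inactive slot with an old restart_lsn is what keeps the WAL around
psql -c "SELECT slot_name, active, restart_lsn FROM pg_replication_slots;"
# drop the slot that belonged to the dead replica (example name)
psql -c "SELECT pg_drop_replication_slot('dead_replica_slot');"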
You can use the script below to automatically clean up pg_wal files. It works with PostgreSQL 11; for another version simply replace /usr/pgsql-11/bin/pg_archivecleanup with /usr/pgsql-12/bin/pg_archivecleanup (or 13, and so on) as appropriate.
#!/bin/bash
# Read the oldest WAL segment still required (the latest checkpoint's REDO WAL file) from pg_controldata ...
REDO_WAL=$(/usr/pgsql-11/bin/pg_controldata -D /var/lib/pgsql/11/data/ | awk -F': *' '/REDO WAL file/ {print $2}')
# ... and remove everything older than it from pg_wal (-d prints each file as it is deleted)
/usr/pgsql-11/bin/pg_archivecleanup -d /var/lib/pgsql/11/data/pg_wal "$REDO_WAL"

pg_archivecleanup and streaming replication

Using postgres 9.3.
I'm a bit confused on the proper usage of pg_archivecleanup.
I'm using both streaming replication and backup with continuous archiving for PITR recovery.
I don't think I can configure pg_archivecleanup in recovery.conf on the standby as it wouldn't achieve anything. The master is not archiving to a location accessible to the standby. The master is archiving to a location on its local disk, and then those archives and the associated backup are being rsync'd to a large backup disk.
So, it seems the solution would be to run pg_archivecleanup in "standalone" mode on the master, such as:
/usr/lib/postgresql/9.3/bin/pg_archivecleanup -d /archive 0000000100000010000000F0.00000028.backup
So, I'd do a cron job that would run the pg_archivecleanup command for any .backup files which are older than the latest one, and then delete those backup files, leaving only the latest one.
Is my understanding and plan correct?
If you want to retain only WAL segments after the latest base backup, you simply run pg_archivecleanup in standalone mode for the latest .backup file (not for those older than the latest).
But do you really want to have only one available backup? First of all, you won't be able to restore to any point before the last backup. Second, it makes sense to keep a few backups just in case (corruption, etc.).
And it seems strange to archive segments to local disk and then rsync them elsewhere. Why not put your rsync (followed by sync, to flush OS buffers to disk) into archive_command? That ensures the segment won't be removed from pg_xlog before it reaches its destination.
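For example (the mount point is hypothetical), archiving straight to the backup disk could look like:
# postgresql.conf - a non-zero exit keeps the segment in pg_xlog so PostgreSQL retries it later
archive_command = 'rsync -a %p /mnt/backupdisk/wal_archive/%f && sync'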

Disabling WAL archiving during pg_restore?

Let's say I made a quick backup dump of my Postgres 9.3 DB through pg_dump before doing a large destructive migration and I discovered I want to undo it. No writes were performed against the DB in the meantime.
Say I run pg_restore -c -d mydb < foo.dump to load the dump back into the db. Assuming I have WAL-E set up to archive every 16 MB of WAL, do I need to turn off archive_mode before performing the restore? It would not be super useful for me to archive the xlog as I'm writing the dump back into the DB, since I already have perfectly valid base backups and WAL segments archived for before the dump. Also there are serious consequences to performing xlog shipping as I'm restoring the dump, which get worse with the size of the dump.
Do you end up disabling archiving before a restore? Do you do anything else to speed things up? There's a discussion of restore performance in this post, but it doesn't cover archiving at all, unless I missed something.
You can't really turn WAL archiving on and off like that. WAL replay requires continuity.
If you turned WAL archiving off, then made changes, then turned it back on, the WAL generated after turning it off and on again would be useless: it could not be applied to the DB, and you'd have to make a new base backup before you can resume WAL replay / PITR.
If you turn xlog shipping off during a restore, you'll want to purge your old base backup and WAL archives, then create a new base backup before resuming WAL shipping.
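A rough sketch of that sequence, with placeholder paths and assuming a local backup disk rather than WAL-E's object storage:
# the old base backup and archived WAL can no longer form a consistent chain; clear them out
rm -rf /backup/base/* /backup/wal_archive/*
# with archive_mode back on, take a fresh base backup to restart the chain
pg_basebackup -D /backup/base/$(date +%Y%m%d) -Ft -z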

Best method for PostgreSQL incremental backup

I am currently using pg_dump piped to gzip piped to split. But the problem with this is that all the output files change every time, so a checksum-based backup always copies all the data.
Are there any other good ways to perform an incremental backup of a PostgreSQL database, where a full database can be restored from the backup data?
For instance, if pg_dump could make everything absolutely ordered, so all changes are applied only at the end of the dump, or similar.
Update: Check out Barman for an easier way to set up WAL archiving for backup.
You can use PostgreSQL's continuous WAL archiving method. First you need to set wal_level=archive, then do a full filesystem-level backup (between issuing pg_start_backup() and pg_stop_backup() commands) and then just copy over newer WAL files by configuring the archive_command option.
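Roughly, the pieces fit together like this (9.x-era settings; the archive path, backup path, and data directory are placeholders):
# postgresql.conf
wal_level = archive        # 'replica' on 9.6 and later
archive_mode = on
archive_command = 'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f'
# full filesystem-level base backup, taken while the server is running
psql -c "SELECT pg_start_backup('weekly_base');"
tar -czf /backup/base_$(date +%Y%m%d).tar.gz /var/lib/postgresql/9.3/main
psql -c "SELECT pg_stop_backup();"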
Advantages:
Incremental, the WAL archives include everything necessary to restore the current state of the database
Almost no overhead, copying WAL files is cheap
You can restore the database at any point in time (this feature is called PITR, or point-in-time recovery)
Disadvantages:
More complicated to set up than pg_dump
The full backup will be much larger than a pg_dump because all internal table structures and indexes are included
Does not work well for write-heavy databases, since recovery will take a long time.
There are some tools such as pitrtools and omnipitr that can simplify setting up and restoring these configurations. But I haven't used them myself.
Also check out http://www.pgbackrest.org
pgBackRest is another backup tool for PostgreSQL which you should evaluate, as it supports:
parallel backup (tested to scale almost linearly up to 32 cores, but can probably go much farther...)
compressed-at-rest backups
incremental and differential (compressed!) backups
streaming compression (data is compressed only once at the source and then transferred across the network and stored)
parallel, delta restore (ability to update an older copy to the latest)
Fully supports tablespaces
Backup rotation and archive expiration
Ability to resume backups which failed for some reason
etc, etc..
Another method is to back up to plain text and use rdiff to create incremental diffs.
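A sketch of that approach using rdiff from librsync (the database name and file names are placeholders):
# weekly full: plain-text dump kept as the reference copy
pg_dump mydb > base.sql
rdiff signature base.sql base.sig
# daily: dump again and store only the delta against the reference
pg_dump mydb > today.sql
rdiff delta base.sig today.sql today.delta
# restore: rebuild the daily dump from the reference and its delta
rdiff patch base.sql today.delta restored.sql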