Compile PostgreSQL with reverse endianness

My old iMac G5 died recently. I had a PostgreSQL instance running there, and I kept backups. But when I went to restore them on my newer MacBook Pro, I realized that I couldn't do the restore because of byte-endianness differences.
Are there any compile flags I can pass to PostgreSQL's configure script to use reversed endianness on Intel, allowing me to restore from the backup, take a SQL dump, and then restore that on a default-endianness instance?
Actually, I don't even know if I need to recompile PostgreSQL; perhaps there is a setting someone can point me to that would allow me to do the restore?

No. If your backups were made with pg_dump they'd be endian-independent, even if you used the binary format.
If you took a copy of the data directory (with the database shut down, or using pg_start_backup()/pg_stop_backup()), you need to restore it on a machine with the same endianness. You're probably best off importing it into a virtual machine with the same endianness somewhere, then dumping from there with pg_dump and reloading on your new machine.
If you took the backup from disk without shutting down PostgreSQL, the same basic approach holds, except you're also going to have to try to recover from a corrupt backup. That can be a little work or hundreds of hours or more, depending on how much traffic the database had and how lucky you are.
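For illustration, once the old data directory is running again on a same-endianness (PowerPC) VM, the dump-and-reload step might look roughly like the sketch below; the host name "g5-vm" and database name "mydb" are placeholders.

    # Minimal sketch; host and database names are placeholders.
    pg_dump -h g5-vm -U postgres -Fc mydb > mydb.dump   # logical dump is endian-independent
    createdb -U postgres mydb                           # create the target database on the Intel machine
    pg_restore -U postgres -d mydb mydb.dump            # reload from the custom-format dump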

Related

Database restore from a hacked system

A Linux VM with Postgres 9.4 was hacked into. (Two processes were taking 100% CPU, and there were weird files in /tmp; it did not reoccur after kill(s) and a restart.) It was decided to install the system from scratch on a new machine (with Postgres 9.6). The only data needed was in one of the Postgres databases. A pg_dump of the database was made after the attack.
Regardless of whether the data - the tables/rows/etc. - were modified during the attack: is it safe to restore the database in the new system?
I am considering using pg_restore with the -O option (which skips restoring object ownership).
The two dangers are:
important data could have been modified
back doors could have been installed in your database
With the first, you're on your own as to how to verify that your data are OK. The safest thing would be to use a backup from before the machine was compromised, but this would mean data loss.
For the second, I would run pg_dumpall -s and spend a day reading it carefully. Compare it with a dump from a backup made before the breach. Watch out for weird object and column names, and for functions with SECURITY DEFINER.
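As a sketch of that check (file names are placeholders, and a schema dump from before the breach is assumed to exist):

    # Rough sketch only; file names are placeholders.
    pg_dumpall -s > schema_after_breach.sql                            # schema-only dump of the current cluster
    diff -u schema_before_breach.sql schema_after_breach.sql | less    # compare against the pre-breach schema
    grep -n "SECURITY DEFINER" schema_after_breach.sql                 # flag functions that run with their owner's privileges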

Can I copy the postgresql /base directory as a DB backup?

Don't shoot me, I'm only the OP!
When we need to back up our DB, we are always able to shut down PostgreSQL completely. Once it is down, I found I could copy the "base" directory with the binary data in it to another location, checksum it for accuracy, and later restore it if/when necessary. This has even worked when upgrading to a later version of PostgreSQL. Integrity of the various 'conf' files is not an issue, as that is handled elsewhere (i.e. by other processes/procedures!) in the system.
Is there any risk to this approach that I am missing?
The "File System Level Backup" link in Abelisto's comment is what JoeG is talking about doing. https://www.postgresql.org/docs/current/static/backup-file.html
To be safe I would go up one more level, to "main" on our Ubuntu systems, to take the snapshot, and thoroughly go through the caveats of doing file-level backups. I was tempted to post the caveats here, but I'd end up quoting the entire page.
The thing to be most aware of (in a 'simple' Postgres environment) is the relationship between the postgres database, a user database, and the pg_clog and pg_xlog directories. If you only take "base", you lose the transaction and WAL information, and in more complex installations, other necessary pieces as well.
If the caveats listed there do not apply to your environment, and you can do a full shutdown, this is a valid backup strategy, and it can be much faster than a pg_dump.
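A minimal sketch of that approach on an Ubuntu-style layout (the service name, paths and version are placeholders); note that the whole data directory is archived, not just base/:

    # Minimal sketch; service name, paths and version are placeholders.
    sudo systemctl stop postgresql                        # make sure the cluster is fully shut down
    sudo tar -czf /backups/pgdata-$(date +%F).tar.gz \
        -C /var/lib/postgresql/9.6 main                   # archive the whole data directory
    sudo systemctl start postgresql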

MongoDB 2Gb limit - can't compact database

I have been adding files to GridFS in my 32-bit MongoDB database. It eventually failed when the total size of the Mongo files hit 2 GB. So I then deleted the files in GridFS. I've tried running the repairDatabase() command, but it fails, saying "mongo requires 64bit for larger datasets". I get the same error when trying to run the compact command against GridFS.
So I've hit the 2 GB limit, but it won't let me compact or repair because it doesn't have the space. Talk about a Catch-22!
What do I do?
Edit
This is an immediate problem I have - how do I compact the database right now?
I think the only recourse is to upgrade to a 64-bit OS.
I had the same problem with my database, and I solved it this way. First I created a 64-bit Amazon EC2 instance and moved the database files over from the 32-bit instance with a plain copy. Then I did all the needed cleanup in the database on the 64-bit instance and made a dump with mongodump. I moved this dump back to the 32-bit instance and restored the database from it.
If you need to restore the database under the same name you had before, you can just rename your old db files in the dbpath (the files have the database name in their names).
And of course, you should upgrade to 64-bit later; MongoDB support on a 32-bit OS is very limited.
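A rough sketch of that dump-and-restore round trip (the hosts, the output path and the database name are placeholders):

    # Rough sketch; hosts, paths and database name are placeholders.
    mongodump --host ec2-64bit-host --db mydb --out /tmp/dump    # dump the cleaned-up database on the 64-bit instance
    mongorestore --host 32bit-host --db mydb /tmp/dump/mydb      # restore it back onto the 32-bit instance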
A shot in the dark here... you could try opening a slave off the master (in 64-bit) and see if you can force a replication over to the slave, essentially backing up your data. I have no idea if this would actually work, as it's pretty clear that 32-bit has a 2 GB limit (all their docs warn about this :( ), but I thought I'd at least post a somewhat potentially creative solution.

Backing up the DB vs. backing up the VM

We're serving a Django/Postgres site running on a VM hypervisor. We're now trying to figure out our backup strategy and have two probable options:
Back up the DB directly using pg_dump
Back up the VM directly by copying the VM image
I lean towards the latter because, I think, I could simply back up everything that has to do with the site. I'm not sure whether I'd have to shut down the VM for this, though.
What is a better and more recommended way of backing up a DB? Are there any reasons for not using the VM backup?
Thanks
The question basically boils down to, can you consider a hot copy of PostgreSQL's data files a backup?
The answer is: not really. PostgreSQL tries very hard through the use of WAL to ensure that its files are in a consistent state all the time and that it can survive a power failure, but starting it up from a copy of these files puts PostgreSQL into recovery mode. If the backup happened at the wrong second and PostgreSQL can't recover from the state of these files, your backup is useless. You don't want your backup/restore mechanism to depend on the recovery mechanism (unless you're dealing with "crash only" software, which PostgreSQL is not).
The probability of PostgreSQL not being able to recover from these files is not high, but it's not zero either. The probability of PostgreSQL not being able to load an SQL dump that it made, on the other hand, is zero. I prefer backup choices with lower probabilities of failure. pg_dump was designed for doing backups.
PostgreSQL recommends using pg_dump for backups, as a file system (or VM) backup requires the database to be shut down (and has other drawbacks):
http://www.postgresql.org/docs/8.1/static/backup-file.html
Edit: Also, a pg_dump backup will be significantly smaller than a filesystem dump of the same database.
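For context, a scheduled logical backup along those lines might look roughly like this (the database name, path and schedule are placeholders):

    # Minimal sketch: nightly compressed custom-format dump via cron.
    # Database name, path and schedule are placeholders.
    0 2 * * * pg_dump -Fc -f /backups/mysite-$(date +\%F).dump mysite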
There is an additional option. With PostgreSQL you can make an online backup that allows you to snapshot the file system and maintain consistency. You can see details here:
http://www.postgresql.org/docs/9.0/static/continuous-archiving.html
We use this exact method for making backups when we run PostgreSQL in a VM.
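A rough sketch of that kind of snapshot-based online backup, matching the 9.x-era docs linked above (on current releases the functions are pg_backup_start()/pg_backup_stop()); the snapshot command is a placeholder for whatever your hypervisor or filesystem provides, and WAL archiving is assumed to be configured:

    # Rough sketch; the snapshot step is a placeholder.
    psql -U postgres -c "SELECT pg_start_backup('vm-snapshot');"   # put the cluster into backup mode
    virsh snapshot-create-as myvm backup-$(date +%F)               # placeholder: take the VM/filesystem snapshot here
    psql -U postgres -c "SELECT pg_stop_backup();"                 # leave backup mode; required WAL is archived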

Best method for PostgreSQL incremental backup

I am currently using pg_dump piped to gzip piped to split. The problem with this is that all the output files change every time, so a checksum-based backup always copies all of the data.
Are there any other good ways to perform an incremental backup of a PostgreSQL database, where a full database can be restored from the backup data?
For instance, it would help if pg_dump made everything absolutely ordered, so that all changes appear only at the end of the dump, or something similar.
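For reference, the setup described above looks roughly like this (the database name, chunk size and paths are placeholders):

    # Rough sketch of the existing pipeline; names and sizes are placeholders.
    pg_dump mydb | gzip | split -b 100m - /backups/mydb.sql.gz.   # dump, compress, split into 100 MB chunks
    cat /backups/mydb.sql.gz.* | gunzip | psql mydb               # restore by reassembling the chunks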
Update: Check out Barman for an easier way to set up WAL archiving for backup.
You can use PostgreSQL's continuous WAL archiving method. First you need to set wal_level=archive, then do a full filesystem-level backup (between issuing the pg_start_backup() and pg_stop_backup() commands), and then just copy over newer WAL files by configuring the archive_command option (see the sketch after the lists below).
Advantages:
Incremental, the WAL archives include everything necessary to restore the current state of the database
Almost no overhead, copying WAL files is cheap
You can restore the database at any point in time (this feature is called PITR, or point-in-time recovery)
Disadvantages:
More complicated to set up than pg_dump
The full backup will be much larger than a pg_dump because all internal table structures and indexes are included
Does not work well for write-heavy databases, since recovery will take a long time.
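A minimal sketch of that setup, assuming a local /archive directory and an Ubuntu-style data directory (both placeholders):

    # Minimal sketch; paths are placeholders, adapt archive_command to your storage.
    # postgresql.conf:
    #   wal_level = archive                      # "replica" on 9.6 and later
    #   archive_mode = on
    #   archive_command = 'cp %p /archive/%f'

    # Take the full base backup that the archived WAL will apply to:
    psql -U postgres -c "SELECT pg_start_backup('base');"
    tar -czf /backups/base.tar.gz -C /var/lib/postgresql/9.6 main
    psql -U postgres -c "SELECT pg_stop_backup();"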
There are some tools such as pitrtools and omnipitr that can simplify setting up and restoring these configurations. But I haven't used them myself.
Also check out http://www.pgbackrest.org
pgBackRest is another backup tool for PostgreSQL worth evaluating, as it supports the following (a usage sketch follows the list):
parallel backup (tested to scale almost linearly up to 32 cores, but it can probably go much further)
compressed-at-rest backups
incremental and differential (compressed!) backups
streaming compression (data is compressed only once at the source, then transferred across the network and stored)
parallel, delta restore (the ability to update an older copy to the latest)
full tablespace support
backup rotation and archive expiration
the ability to resume backups that failed for some reason
and more
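A rough sketch of a pgBackRest workflow (the stanza name is a placeholder, and /etc/pgbackrest.conf is assumed to already point at the cluster and a backup repository):

    # Rough sketch; stanza name and configuration are placeholders.
    pgbackrest --stanza=main stanza-create          # initialize the repository for this cluster
    pgbackrest --stanza=main --type=full backup     # take a full backup
    pgbackrest --stanza=main --type=incr backup     # take a subsequent incremental backup
    pgbackrest --stanza=main info                   # list backups and their sizes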
Another method is to back up to plain text and use rdiff to create incremental diffs.
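A rough sketch of that approach with the rdiff tool from librsync (file names are placeholders, and old.sql is assumed to be the previous full plain-text dump):

    # Rough sketch; file names are placeholders.
    pg_dump mydb > new.sql                       # today's plain-text dump
    rdiff signature old.sql old.sig              # signature of the previous dump
    rdiff delta old.sig new.sql today.delta      # incremental diff to store
    rdiff patch old.sql today.delta rebuilt.sql  # later: rebuild today's dump from the old dump + delta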