PostgreSQL PITR backup: best practices to handle multiple databases?

Hi guys, I have a PostgreSQL 8.3 server with many databases.
Currently, I'm planning to back up those databases with a script that stores each backup in a folder with the same name as the database, for example:
/mypath/backup/my_database1/
/mypath/backup/my_database2/
/mypath/backup/foo_database/
Every day I make one dump every 2 hours, overwriting the previous day's files. For example, in the my_database1 folder I have:
my_database1.backup-00.sql //backup made every day at 00:00
my_database1.backup-02.sql //backup made every day at 02:00
my_database1.backup-04.sql //backup made every day at 04:00
my_database1.backup-06.sql //backup made every day at 06:00
my_database1.backup-08.sql //backup made every day at 08:00
my_database1.backup-10.sql //backup made every day at 10:00
[...and so on...]
This is how I currently make sure I can restore every database while losing at most 2 hours of data.
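Roughly, the script that cron runs every 2 hours looks like this (a simplified sketch; the real paths and options differ):

#!/bin/bash
# Simplified sketch of the dump script (cron runs it every 2 hours).
HOUR=$(date +%H)
for DB in $(psql -At -d postgres -c "SELECT datname FROM pg_database WHERE NOT datistemplate"); do
    mkdir -p "/mypath/backup/$DB"
    pg_dump "$DB" > "/mypath/backup/$DB/$DB.backup-$HOUR.sql"
done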
Two hours still seems like too much.
I've had a look at PostgreSQL PITR through the WAL files, but those files seem to contain the data of all my databases together.
I would need to separate those files, in the same way I separate the dump files.
How can I do that?
Otherwise, is there another easy-to-install backup procedure that would let me restore a single database to, say, 10 seconds earlier, without creating a dump file every 10 seconds?

It is not possible with one instance of PostgreSQL.
You can divide your 500 tables between several instances, each listening on a different port, but it would mean that they will not use resources like memory effectively (memory reserved but unused in one instance cannot be used by another).
Slony will also not work here, as it does not replicate DDL statements, like dropping a table.
I'd recommend doing both:
continue to do your pg_dump backups, but try to smooth them out - throttle pg_dump's I/O bandwidth so it will not cripple the server, and run it continuously: when it finishes with the last database, immediately start again with the first one (see the sketch below);
additionally, set up PITR.
This way you can restore a single database quickly, but you can lose some data. If you decide that you cannot afford to lose that much data, then you can restore your PITR backup to a temporary location (with fsync=off and pg_xlog symlinked to a ramdisk for speed), pg_dump the affected database from there and restore it to your main database.
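A minimal sketch of such a continuous, throttled dump loop (the throttling tool, the 5 MB/s cap and the paths are assumptions - pv, ionice or similar would do):

#!/bin/bash
# Continuous, throttled dump loop - a sketch, not a drop-in script.
# Assumes pv is installed for bandwidth throttling; paths are examples.
while true; do
    for DB in $(psql -At -d postgres -c "SELECT datname FROM pg_database WHERE NOT datistemplate"); do
        pg_dump "$DB" | pv -q -L 5m > "/mypath/backup/$DB/$DB.backup.sql"
    done
done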

Why do you want to separate the databases?
The way PITR works, it is not possible, since it operates on the complete cluster.
What you can do in that case is create a separate data directory and a separate cluster for each of those databases (not recommended though, since it would require different ports and postmaster instances).
I believe that the benefits of using PITR instead of regular dumps outweigh having separate backups for each database, so perhaps you can re-think the reasons why you need to separate them.
Another way could be to set up some replication with Slony-I but that would require a separate machine (or instance) that receives the data. On the other hand, that way you would have a replicated system in near real-time.
Update for comment:
To recover from mistakes, like deleting a table, PITR would be perfect, since you can replay to a specific point in time. However, for 500 databases I understand that can be a lot of overhead. Slony-I would probably not work, since it replicates the changes; I'm not sure how it handles table deletions.
I am not aware of any other ways you can go. What I would do would probably still be to go for PITR and just not make any mistakes ;). Jokes aside, depending on how frequently mistakes are made, this could be a solution:
Set it up for PITR.
Have a second instance ready on standby.
When a mistake happens, replay the restore to the point in time on the second instance.
Do a pg_dump of the affected database from that instance.
Do a pg_restore on the production instance for that database.
However, it would require you to have a second instance ready, either on the same server or a different one (different is recommended). Also, the restore time would be a bit longer since it would require you to do one extra dump and restore.
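A rough sketch of steps 3-5 on a self-managed setup (recovery.conf applies to the 8.x/9.x era; the ports, paths and timestamp are examples):

# On the second instance, replay the archived WAL up to just before the mistake.
cat > /restore/data/recovery.conf <<'EOF'
restore_command = 'cp /mypath/wal_archive/%f %p'
recovery_target_time = '2010-03-05 09:58:00'
EOF
pg_ctl -D /restore/data -o "-p 5433" start

# Dump only the affected database from the recovered instance...
pg_dump -Fc -p 5433 my_database1 > /tmp/my_database1.dump

# ...and restore it into production.
pg_restore -p 5432 -d my_database1 --clean /tmp/my_database1.dump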

I think the way you are doing this is flawed. You should have one database with multiple schemas and roles. Then you can use PITR. However, PITR is not a replacement for dumps.
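For example (a sketch - the names are made up), each former database becomes a schema owned by its own role, and you can still dump them individually while PITR covers everything:

# One cluster, one database, one schema per application (names are examples).
psql -d main_db -c "CREATE ROLE app1_owner LOGIN PASSWORD 'secret';"
psql -d main_db -c "CREATE SCHEMA app1 AUTHORIZATION app1_owner;"

# Per-schema dumps are still possible alongside cluster-wide PITR.
pg_dump -n app1 main_db > /mypath/backup/app1.sql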

Related

wal-e/wal-g any benefit for simple backup and restore via S3

I'm using AWS RDS and have a need to replicate "database_a" in an RDS instance to "database_a" in a different RDS instance. The replication only needs to be once every 24 hours.
I'm currently solving this with pg_dump and pg_restore but am wondering if there is a better (i.e. faster/more efficient) way I can go about things.
Using wal-e/g and RDS, is it at all possible for my use case to simply push the latest changes from, say, the last 24 hours? The two RDS instances cannot speak to each other, so all connection would be via S3. I'm not clear what the docs mean by 'When uploading backups to S3, the user should pass in the path containing the backup started by Postgres:' - does this mean I can create a pg backup on my EC2 instance and then point wal-g at this backup?
Finally, is it at all possible to just use wal-e/g for complete backups (i.e. non-incremental), just as I am doing now with pg_dump/pg_restore, and in doing so would I see a speed improvement by switching?
Thanks in advance,
In a word, yes.
On a system using dump/restore, you're consuming a lot more CPU and network resources (and therefore cost), which you could reduce notably by using the WALs for incremental backups and only taking a full image perhaps once a week. This is especially true if your database is mostly data that doesn't change. It might not hold if your database is not growing but is made up of records that are updated many times per 24 hours (e.g. stock prices).
Once you are publishing WALs to S3 frequently, you'll have a far more up-to-date backup than nightly backups.
When publishing WALs, you can recover to any point in time.
WAL-E and WAL-G both have built-in encryption.
There is also differential backup support, though that's not something I've played with.
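On a self-managed Postgres (not on RDS itself, where you cannot set archive_command or reach the data directory), the WAL-G flow looks roughly like this; the bucket, region and paths are examples:

# Environment for WAL-G (bucket and region are examples).
export WALG_S3_PREFIX='s3://my-backup-bucket/pg'
export AWS_REGION='us-east-1'

# postgresql.conf: ship every completed WAL segment to S3.
#   archive_mode = on
#   archive_command = 'wal-g wal-push %p'

# Periodic full base backup (weekly, say), pointed at the data directory.
wal-g backup-push "$PGDATA"

# On the restore side: fetch the latest base backup, then let Postgres replay the WALs.
wal-g backup-fetch /var/lib/postgresql/data LATEST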

How to see changes in a postgresql database

My PostgreSQL database is updated each night.
At the end of each nightly update, I need to know what data changed.
The update process is complex, taking a couple of hours and requires dozens of scripts, so I don't know if that influences how I could see what data has changed.
The database is around 1 TB in size, so any method that requires starting a temporary database may be very slow.
The database is an AWS instance (RDS). I have automated backups enabled (these are different from RDS snapshots, which are user-initiated). Is it possible to see the difference between two RDS automated backups?
I do not know if it is possible to see the difference between RDS snapshots. But in the past we tested several solutions for a similar problem. Maybe you can take some inspiration from them.
The obvious solution is of course an auditing system. This way you can see in a relatively simple way what was changed - depending on the granularity of your auditing system, down to column values. Of course, there is an impact on your application due to the auditing triggers and the queries into the audit tables.
Another possibility: for tables with primary keys you can store the values of the primary key and of the 'xmin' and 'ctid' hidden system columns (https://www.postgresql.org/docs/current/static/ddl-system-columns.html) for each row before the update and compare them with the values after the update. But this way you can only identify changed/inserted/deleted rows, not changes in individual columns.
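A rough sketch of that idea for a single table (the table, key and database names are made up):

# Before the nightly update: snapshot the primary key plus the hidden system columns.
psql -d mydb -c "CREATE TABLE before_update AS
                 SELECT id, xmin::text AS row_xmin, ctid AS row_ctid FROM my_table;"

# After the update: rows that are new or whose xmin/ctid changed were touched.
# (Deleted rows would need the reverse join.)
psql -d mydb -c "SELECT t.id
                 FROM my_table t
                 LEFT JOIN before_update b ON b.id = t.id
                 WHERE b.id IS NULL
                    OR b.row_xmin IS DISTINCT FROM t.xmin::text
                    OR b.row_ctid <> t.ctid;"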
You can also set up a streaming replica with replication slots (and, to be on the safe side, WAL archiving as well). Then stop replication on the replica before the updates and compare the data after the updates using dblink selects. But these queries can be very heavy.

What's the difference between Heroku's Postgres Continuous Protection vs including a Follower database for integrity and recovery

I'm considering deploying an app to Heroku along with a Postgres Standard database plan. I'm keen on ensuring data integrity and making sure that in no case can my customers' data be lost if the database becomes corrupted or some other similar issue occurs. I also want to ensure a smooth recovery process in this event. So I have the following questions:
First, I'm assuming that with Continuous Protection there's still a possibility that a database can become corrupted. Is this true?
What provides more integrity, protection, and ease of recovery if a database becomes corrupted: a Standard DB with Continuous Protection, or a Standard DB with a Follower DB?
If by chance the DB becomes corrupted, or database integrity issues arise, how will Heroku remediate (given the database is a "managed" service)? Is it automated or do I have to work with Support manually to remediate?
I would love to hear your thoughts on this. My experience in the past has been with MySQL but not Postgres, which I hear great things about.
Thanks
Caveat: I have some experience with Postgresql, but I don't have any experience with Heroku as such.
What Heroku calls 'Continuous Protection' and 'follower' databases are implemented using PostgreSQL's continuous archiving and streaming replication functionality. Heroku has provided a range of administrative tools and infrastructure around these functions to make them easier to use.
Both of these functions make use of the fact that Postgresql writes all updates that it is making to databases in a Write-Ahead Log (WAL).
With Continuous Archiving, one takes a complete copy of all of the underlying files in the database - this is referred to as the base backup. One also collects all WAL files produced by the database, both during and after production of the base backup. Note that you do not need to stop the database in order to make the base backup - it is a fairly unobtrusive process.
If the worst happens, and it is necessary to recover the database from the backup, you just restore the base dump, configure the database so it knows where to find the archived WAL files, and start it up. It will then replay the WAL files in sequence until it is fully up to date.
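On Heroku all of this is managed for you, but on a plain PostgreSQL installation the moving parts look roughly like this (paths are examples; recovery.conf applies to versions before 12):

# postgresql.conf - copy each completed WAL segment somewhere safe.
#   wal_level = replica              # 'archive' or 'hot_standby' on older releases
#   archive_mode = on
#   archive_command = 'cp %p /mnt/wal_archive/%f'

# The base backup can be taken while the server keeps running.
pg_basebackup -D /mnt/base_backup

# To recover: copy the base backup back into place, point recovery at the archive
# (restore_command = 'cp /mnt/wal_archive/%f %p'), optionally set a
# recovery_target_time to stop the replay early, and start the server.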
Note that you can also stop the replay early. This can be extremely useful, as you will see in my answer to your first question:
First, I'm assuming that with Continuous Protection there's still a possibility that a database can become corrupted. Is this true?
Yes, of course. Database corruption can happen for a number of reasons: hardware failure, a software fault in the database, a fault in your application, or even operator error.
One of the benefits of continuous archiving, though, is that you can replay the WAL files up to a particular point in time, so you can effectively rewind back to the point immediately before the database became corrupted.
As mentioned above, a Follower DB uses Postgresql's 'Streaming Replication' function. With this function, you restore your base backup onto another server, configure it to connect to the master database and fetch WAL files in real time as they are produced. The follower then keeps up to date with any changes made on the master.
What provides more integrity, protection, and ease of recovery if a database becomes corrupted: a Standard DB with Continuous Protection, or a Standard DB with a Follower DB?
Ease of recovery is the difference.
If you have a Follower DB, it is a hot standby - if the master fails for some reason, you can switch your application over to the follower with minimal downtime. On the other hand, if you have a large database and you have to restore it from the last base backup and then replay all the WAL files produced since - well that could take a long time, days even if it was a really large database.
Note also, however, that a follower DB will be of no use if your database becomes corrupted due to, for example, an administrator accidentally dropping the wrong table. The table will be dropped in the follower only a few seconds later. They are like lemmings going over a cliff. The same applies if your application corrupts the database due to a bug, a hacker, or whatever. Even with the follower, you must have a proper backup in place, either a continuous archive or a normal pg_dump.
If by chance the DB becomes corrupted, or database integrity issues arise, how will Heroku remediate (given the database is a "managed" service)? Is it automated or do I have to work with Support manually to remediate?
Their documentation indicates that premium plans do feature automated failover. This would be useful in the event of a hardware or platform failure and most kinds of database failure, where the system can detect that the master database has gone down and initiate a failover.
In the case where the database becomes corrupted by the application itself (or a hasty admin) then I suspect you would have to manually initiate failover.

postgres copy database to another server reduces database size

PostgreSQL 9.1 is installed on both machines.
Initially the DB size was 7052 MB; then I used the following command to copy it to another server:
pg_dump -C dbname | bzip2 | ssh remoteuser@remotehost "bunzip2 | psql dbname"
After the copy completed successfully, I checked the size on the destination machine; it shows 6653 MB.
Then I checked the table count; it is the same.
Has there been data loss? Is there missing data?
Note:
Both machines have the same hardware and software configuration.
I used:
SELECT pg_size_pretty(pg_database_size('dbname'));
One of PostgreSQL's most sophisticated features is the so-called Multi-Version Concurrency Control (MVCC), a standard technique for avoiding conflicts between reads and writes of the same object in a database. MVCC guarantees that each transaction sees a consistent view of the database by reading non-current data for objects modified by concurrent transactions. Thanks to MVCC, PostgreSQL has great scalability, a robust hot backup tool and many other nice features comparable to the most advanced commercial databases.
Unfortunately, there is one downside to MVCC, the databases tend to grow over time and sometimes it can be a problem. In recent versions of PostgreSQL there is a separate server process called the autovacuum daemon (pg_autovacuum), whose purpose is to keep the database size reasonable. It does that by trying to recover reusable chunks of the database files. Still, there are many scenarios that will force the database to grow, even if the amount of the useful data in it doesn't really change. That happens typically if you have lots of UPDATE and/or DELETE statements in the applications that are using the database.
When you copy the database via dump and restore, that extraneous space is not carried over, and so your copied DB appears smaller.
That looks normal. Databases are often smaller after restore, because a newly created b-tree index is more compact than one that's been progressively built by inserts. Additionally, UPDATEs and DELETEs leave empty space in the tables.
So you have nothing to worry about. You'll find that if you diff an SQL dump from the old DB and a dump taken from the just-restored DB, they'll be the same except for comments.
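For example, to convince yourself (the database name and host are placeholders):

# Plain-text dumps from the two servers should differ only in comments.
pg_dump dbname > /tmp/old_server.sql
ssh remoteuser@remotehost "pg_dump dbname" > /tmp/new_server.sql
diff /tmp/old_server.sql /tmp/new_server.sql

# The size itself can be compared on both sides with:
psql -d dbname -c "SELECT pg_size_pretty(pg_database_size('dbname'));"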

is it possible to fork a mysqldump of data?

I am restoring a MySQL database with Perl on a remote server with about 30 million records. It's taking more than 2 days and, looking at my network connections, I am not fully utilizing my uplink bandwidth. I will need to do this at least once per week. Is there a way to fork a mysqldump (I'm using Perl) so that I can take full advantage of my bandwidth? (I don't mind if I'm choked off for a bit... I just need to get this done faster.)
Can't you upload the whole dump to the remote server and start the restore there?
A restore of a mysqldump is just the execution of a long series of commands that rebuild your database from scratch. If the execution path for that is 1) send a command, 2) the remote system executes the command, 3) the remote system replies that the command is complete, 4) send the next command, then you are spending most of your time waiting on network latency.
I do know that most SQL hosts will allow you to upload a dump file specifically to avoid the kinds of restore time that you're talking about. The company that takes my money each month even has a web-based form that you can use to restore a database from a file that has been uploaded via sftp. Poke around your hosting service's documentation. They should have something similar. If nothing else (and you're comfortable on the command line) you can upload it directly to your account and do it from a shell there.
mk-parallel-dump and mk-parallel-restore are designed to do what you want, but in my testing mk-parallel-dump was actually slower than plain old mysqldump. Your mileage may vary.
(I would guess the biggest factor would be the number of spindles your data files reside on, which in my case, 1, was not especially conducive to parallelization.)
First caveat: mk-parallel-* writes a bunch of files, and figuring out when it's safe to start sending them (and when you're done receiving them) may be a little tricky. I believe that's left as an exercise for the reader, sorry.
Second caveat: mk-parallel-dump is specifically advertised as not being for backups. Because "At the time of this release there is a bug that prevents --lock-tables from working correctly," it's really only useful for databases that you know will not change, e.g., a slave that you can STOP SLAVE on with no repercussions, and then START SLAVE once mk-parallel-dump is done.
I think a better solution than parallelizing a dump may be this:
If you're doing your mysqldump on a weekly basis, you can just do it once (dumping with --single-transaction (which you should be doing anyway) and --master-data=n) and then start a slave that connects over an ssh tunnel to the remote master, so the slave is continually updated. The disadvantage is that if you want to clone a local copy (perhaps to make a backup) you will need enough disk to keep an extra copy around. The advantage is that a week's worth of (query-based) replication log is probably quite a bit smaller than resending the data, and also it arrives gradually so you don't clog your pipe.
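A rough sketch of that approach (the host names, tunnel port and binlog coordinates are made up):

# One-off seed: consistent snapshot plus binlog coordinates, compressed over ssh.
mysqldump --single-transaction --master-data=2 mydb | gzip \
    | ssh user@remotehost "gunzip | mysql mydb"

# Keep an ssh tunnel open on the remote box so its slave thread can reach the master.
ssh -f -N -L 3307:127.0.0.1:3306 user@masterhost

# On the remote box, point replication at the tunnel; the log file/position
# come from the comment that --master-data=2 wrote into the dump.
mysql -e "CHANGE MASTER TO MASTER_HOST='127.0.0.1', MASTER_PORT=3307,
          MASTER_USER='repl', MASTER_PASSWORD='secret',
          MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=4;
          START SLAVE;"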
How big is your database in total? What kind of tables are you using?
A big risk with backups using mysqldump has to do with table locking, and updates to tables during the backup process.
The mysqldump backup process basically works as follows:
For each table {
Lock table as Read-Only
Dump table to disk
Unlock table
}
The danger is that if you run an INSERT/UPDATE/DELETE query that affects multiple tables while your backup is running, your backup may not capture the results of your query properly. This is a very real risk when your backup takes hours to complete and you're dealing with an active database. Imagine - your code runs a series of queries that update tables A,B, and C. The backup process currently has table B locked.
The update to A will not be captured, as this table was already backed up.
The update to B will not be captured, as the table is currently locked for writing.
The update to C will be captured, because the backup has not reached C yet.
This is an easy way to destroy referential integrity in your database.
Your backup process needs to be atomic, and transactional. If you can't shut down the entire database to writes during the backup process, you're risking disaster.
Also - there must be something wrong here. At a previous company, we were running nightly backups of a 450 GB MySQL DB (the largest table had 150M rows), and it took less than 6 hours for the backup to complete.
Two thoughts:
Do you have a slave database? Run the backup from there - stop replication (preventing the read/write risk), run the backup, then restart replication.
Are your tables using InnoDB? Consider investing in InnoDB Hot Backup, which solves this problem, as the backup process leverages the journaling that is part of the InnoDB storage engine.
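For the first suggestion, a minimal sketch run on the slave (paths and options are examples):

# Pause replication so the data stops changing underneath the dump...
mysql -e "STOP SLAVE;"

# ...take the backup from the now-static slave...
mysqldump --all-databases > /backup/nightly.sql

# ...then let the slave catch up again.
mysql -e "START SLAVE;"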