We have a few tables with a pretty large number of entries that sometimes need to be re-imported. Only some tables are concerned, so we don't use restore but a command similar to this:
heroku pg:psql --app ourapp HEROKU_POSTGRESQL_WHITE < data.sql
This takes roughly 30min, mainly due to data upload (about 1GB).
Until now we've put the app in maintenance mode to import the new data, but we'd like to avoid the long downtime in the future.
What would be the best way to achieve this in Heroku?
Our first thought to reduce downtime was to find a way to run the command from a server that will have much better upload speed, but it's still not perfect.
We've thought of using followers but some other tables need to be written to when users are interacting with the app, and we're not sure if the app can be told to fall back on followers even if the master db doesn't have issues.
We've also thought of entirely caching all relevant tables while we're uploading new data, and then clearing that cache, but Heroku doesn't seem to give enough control on the cache to achieve that.
Import into a temporary second table, then drop the first table and rename the second one inside a transaction.
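For example, a rough sketch of that approach, assuming data.sql has been changed to load into a staging table (products and products_new are placeholder names, not from the original setup):

# Load the new data into a staging table first; the live table keeps serving reads
heroku pg:psql --app ourapp HEROKU_POSTGRESQL_WHITE < data.sql

# Then swap the tables inside a single transaction so there is no visible gap
heroku pg:psql --app ourapp HEROKU_POSTGRESQL_WHITE <<'SQL'
BEGIN;
DROP TABLE products;
ALTER TABLE products_new RENAME TO products;
COMMIT;
SQL

Note that DROP TABLE takes an exclusive lock and will fail if other objects (views, foreign keys) still reference the old table, so those would need to be recreated against the new table in the same transaction.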
I'm trying to migrate our database engine from MsSql to PostgreSQL. In our automated tests, we restore the database to a "clean" state at the start of every test. We do this by comparing the working copy of the database against the clean copy (table by table), then copying over any records that have changed and deleting any records that have been added. So far this strategy seems to be the best approach for us because, per test, not a lot of data is changed, and the size of the database is not very big.
Now I'm looking for a way to do essentially the same thing with PostgreSQL. Before doing so, I was wondering if anyone else has done something similar, and what method you used to restore data in your automated tests.
On a side note - I considered using MsSql's snapshot or backup/restore strategy. The main problem with these methods is that I have to re-establish the db connection from the app after every test, which is not possible at the moment.
If you're okay with some extra storage, and if you (like me) are not particularly interested in re-inventing the wheel by checking for diffs in your own code, you should try creating a new DB per run via the template feature of the createdb command (or the CREATE DATABASE statement) in PostgreSQL.
For example:
(from bash) createdb todayDB -T snapshotDB
or
(from psql) CREATE DATABASE todayDB TEMPLATE snapshotDB;
Pros:
In theory, always the exact same DB by design (no custom logic)
Copying the template is a file-level transfer, not a DB restore, so it takes far less time (it doesn't re-run SQL, recreate indexes, restore tables, etc.)
Cons:
Takes 2x the disk space (although template could be on a low performance NFS etc)
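Putting that together, a minimal per-run sketch (database names as above; note that CREATE DATABASE ... TEMPLATE fails if anything is still connected to the template):

# Recreate a fresh working copy from the template before each test run
dropdb --if-exists todayDB
createdb -T snapshotDB todayDB

dropdb --if-exists needs PostgreSQL 9.2 or newer; on older versions, just ignore the error from dropdb on the first run.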
For my specific situation, I decided to go back to the original solution, which is to compare the "working" copy of the database with the "clean" copy.
There are three types of changes to handle:
For INSERTed records - find max(id) in the clean table and delete any record in the working table with a higher id.
For UPDATEd or DELETEd records - find all records in the clean table EXCEPT the records found in the working table, then UPSERT those records into the working table (see the sketch below).
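A rough sketch of those two steps for a single table, assuming a primary key id, placeholder columns col1/col2, the clean copy living in a schema called clean, and the working copy in a schema called working (database name is also a placeholder; ON CONFLICT needs PostgreSQL 9.5+):

psql testdb <<'SQL'
-- 1) Remove rows the test inserted (ids above the clean table's max id)
DELETE FROM working.t
 WHERE id > (SELECT max(id) FROM clean.t);

-- 2) Put back rows the test updated or deleted
INSERT INTO working.t (id, col1, col2)
SELECT id, col1, col2
  FROM (SELECT id, col1, col2 FROM clean.t
        EXCEPT
        SELECT id, col1, col2 FROM working.t) AS changed
    ON CONFLICT (id) DO UPDATE
   SET col1 = EXCLUDED.col1,
       col2 = EXCLUDED.col2;
SQL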
So I'm planning an app that will involve having a master DB on a server, let's say 3,000 CDs, with the columns Title, Artist, and Release Date.
1) When a user adds a CD to their collection, it will be added to the app's local SQLite DB. But let's say I spelled a CD title wrong, so I make an update to it. When the user goes to sync, how should I go about handling an updated row? Should I have a column 'IsUpdated' that is just a numeric value that increases by one every time I update that row? That way, when the app sees that IsUpdated on the server is larger than the local IsUpdated for that particular item, it will know to replace the contents. Does that make sense? Is it even practical? What other options would there be?
2) How would I go about handling the addition of brand new columns, like adding a Barcode or Price? Do I just push an update for the app that adds the new columns locally, then do the same on the server, and let the rest run its course? That also ties back into the syncing issue in number 1.
First, you have to give more detail than that. Is the entire 3,000-item master list also replicated down to the remote DB?
Sounds like it.
OK, so if that's the case, this isn't a DB design issue so much as a replication issue.
It's a bad idea to update every row in a table, especially with an update that makes the rows longer; you'll be better off just dropping the table and recreating it. (That's how it works in a server-side RDBMS; I have no idea whether that concept changes on a client DB.) And now we get into iPhone questions of replication more than simple DB replication. Would it be better to just republish the app? Is the user data segregated from the server data? Can DDL be run on the local/remote tables after publishing?
Instead of searching the entire list for changes as you outline in #1, I would keep a dated delta table. The local app would store a last_updated_datetime; any records in the delta table newer than that datetime would need to be brought down. Once downloaded, the local system can determine how to apply them. Again, this is inappropriate for mass changes.
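The server engine wasn't specified, so purely as an illustration (PostgreSQL syntax, all names made up), the delta table and the sync query could look roughly like this:

psql cd_master <<'SQL'
-- Hypothetical server-side delta table: one row per change to the master CD list
CREATE TABLE cd_delta (
    delta_id    serial PRIMARY KEY,
    cd_id       integer NOT NULL,
    change_type text NOT NULL,               -- 'insert', 'update' or 'delete'
    changed_at  timestamptz NOT NULL DEFAULT now()
);

-- The app sends the last_updated_datetime it stored at its previous sync
-- and pulls only the changes recorded since then
SELECT cd_id, change_type, changed_at
  FROM cd_delta
 WHERE changed_at > '2012-06-01 00:00:00+00'   -- placeholder for last_updated_datetime
 ORDER BY changed_at;
SQL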
I am restoring a MySQL database with about 30 million records to a remote server using Perl. It's taking more than 2 days, and looking at my network connections I am not fully utilizing my uplink bandwidth. I will need to do this at least once per week. Is there a way to fork a mysqldump (I'm using Perl) so that I can take full advantage of my bandwidth? (I don't mind if I'm choked off for a bit... I just need to get this done faster.)
Can't you upload the whole dump to the remote server and start the restore there?
A restore of a mysqldump is just the execution of a long series of commands that would restore your database from scratch. If the execution path for that is: 1) send command, 2) remote system executes command, 3) remote system replies that the command is complete, 4) send next command, then you are spending most of your time waiting on network latency.
I do know that most SQL hosts will allow you to upload a dump file specifically to avoid the kinds of restore time that you're talking about. The company that takes my money each month even has a web-based form that you can use to restore a database from a file that has been uploaded via sftp. Poke around your hosting service's documentation. They should have something similar. If nothing else (and you're comfortable on the command line) you can upload it directly to your account and do it from a shell there.
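For example, a rough sketch of that approach, assuming you have shell access on the remote host (host name, user, database name and paths are placeholders):

# Compress the dump and push it up in one go, so the transfer can saturate the uplink
gzip -c backup.sql > backup.sql.gz
scp backup.sql.gz user@dbhost.example.com:/tmp/

# Run the restore on the remote machine itself, right next to the MySQL server
ssh user@dbhost.example.com 'gunzip -c /tmp/backup.sql.gz | mysql mydatabase'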
mk-parallel-dump and mk-parallel-restore are designed to do what you want, but in my testing mk-parallel-dump was actually slower than plain old mysqldump. Your mileage may vary.
(I would guess the biggest factor would be the number of spindles your data files reside on, which in my case, 1, was not especially conducive to parallelization.)
First caveat: mk-parallel-* writes a bunch of files, and figuring out when it's safe to start sending them (and when you're done receiving them) may be a little tricky. I believe that's left as an exercise for the reader, sorry.
Second caveat: mk-parallel-dump is specifically advertised as not being for backups. Because "At the time of this release there is a bug that prevents --lock-tables from working correctly," it's really only useful for databases that you know will not change, e.g., a slave that you can STOP SLAVE on with no repercussions, and then START SLAVE once mk-parallel-dump is done.
I think a better solution than parallelizing a dump may be this:
If you're doing your mysqldump on a weekly basis, you can just do it once (dumping with --single-transaction (which you should be doing anyway) and --master-data=n) and then start a slave that connects over an ssh tunnel to the remote master, so the slave is continually updated. The disadvantage is that if you want to clone a local copy (perhaps to make a backup) you will need enough disk to keep an extra copy around. The advantage is that a week's worth of (query-based) replication log is probably quite a bit smaller than resending the data, and also it arrives gradually so you don't clog your pipe.
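A hedged sketch of that setup (host, credentials and binlog coordinates are placeholders; --master-data writes the real coordinates as a CHANGE MASTER comment at the top of the dump):

# One-time seed dump on the master: consistent snapshot plus binlog coordinates
mysqldump --single-transaction --master-data=2 --all-databases > seed.sql

# On the replica: keep an ssh tunnel open to the master's MySQL port
ssh -f -N -L 3307:127.0.0.1:3306 user@master.example.com

# Load the seed, then point the replica at the tunnel and start replicating
mysql < seed.sql
mysql -e "CHANGE MASTER TO MASTER_HOST='127.0.0.1', MASTER_PORT=3307,
          MASTER_USER='repl', MASTER_PASSWORD='secret',
          MASTER_LOG_FILE='mysql-bin.000042', MASTER_LOG_POS=107;
          START SLAVE;"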
How big is your database in total? What kind of tables are you using?
A big risk with backups using mysqldump has to do with table locking, and updates to tables during the backup process.
The mysqldump backup process basically works as follows:
For each table {
Lock table as Read-Only
Dump table to disk
Unlock table
}
The danger is that if you run an INSERT/UPDATE/DELETE query that affects multiple tables while your backup is running, your backup may not capture the results of your query properly. This is a very real risk when your backup takes hours to complete and you're dealing with an active database. Imagine - your code runs a series of queries that update tables A,B, and C. The backup process currently has table B locked.
The update to A will not be captured, as this table was already backed up.
The update to B will not be captured, as the table is currently locked for writing.
The update to C will be captured, because the backup has not reached C yet.
This is an easy way to destroy referential integrity in your database.
Your backup process needs to be atomic, and transactional. If you can't shut down the entire database to writes during the backup process, you're risking disaster.
Also - there must be something wrong here. At a previous company, we were running nightly backups of a 450G Mysql DB (largest table had 150M rows), and it took less than 6 hours for the backup to complete.
Two thoughts:
Do you have a slave database? Run the backup from there: stop replication (preventing the read/write risk), run the backup, then restart replication (see the sketch below).
Are your tables using InnoDB? Consider investing in InnoDB Hot Backup, which solves this problem, since the backup process leverages the journaling that is part of the InnoDB storage engine.
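For the slave-based idea, a minimal sketch run on the slave (the dump target is a placeholder):

# Pause the SQL thread so no replicated writes land while the dump runs
mysql -e "STOP SLAVE SQL_THREAD;"

# Dump from the now-quiet slave
mysqldump --all-databases > nightly_backup.sql

# Resume applying replication events; the slave catches up from the relay log
mysql -e "START SLAVE SQL_THREAD;"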
Hi guys, I have a PostgreSQL 8.3 server with many databases.
I'm planning to back up those databases with a script that stores each backup in a folder with the same name as the database, for example:
/mypath/backup/my_database1/
/mypath/backup/my_database2/
/mypath/backup/foo_database/
Every day I make one dump every 2 hours, overwriting the previous day's files. For example, in the my_database1 folder I have:
my_database1.backup-00.sql //backup made every day at 00:00
my_database1.backup-02.sql //backup made every day at 02:00
my_database1.backup-04.sql //backup made every day at 04:00
my_database1.backup-06.sql //backup made every day at 06:00
my_database1.backup-08.sql //backup made every day at 08:00
my_database1.backup-10.sql //backup made every day at 10:00
[...and so on...]
This is how I currently make sure I can restore every database while losing at most 2 hours of data.
But 2 hours still looks like too much.
I've had a look at PostgreSQL PITR through the WAL files, but those files seem to contain the data for all of my databases.
I'd need to separate those files in the same way I separate the dump files.
How can I do that?
Otherwise, is there another easy-to-install backup procedure that would let me restore a single database to a point 10 seconds earlier, without creating a dump file every 10 seconds?
That is not possible with one instance of PostgreSQL.
You can divide your 500 tables between several instances, each listening on a different port, but it would mean that they would not use resources like memory effectively (memory reserved but unused in one instance cannot be used by another).
Slony will also not work here, as it does not replicate DDL statements, like dropping a table.
I'd recommend doing both:
continue to do your pg_dump backups, but try to smooth them out: throttle pg_dump's I/O bandwidth so it does not cripple the server, and run it continuously, starting again with the first database as soon as it finishes with the last one;
additionally, set up PITR.
This way you can restore a single database fast, but you may lose some data. If you decide that you cannot afford to lose that much data, then you can restore your PITR backup to a temporary location (with fsync=off and pg_xlog symlinked to a ramdisk for speed), pg_dump the affected database from there, and restore it into your main database.
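A hedged sketch of that recovery path (paths, ports and the database name are placeholders; fsync=off is only acceptable here because the temporary instance is throwaway):

# /tmp/pitr_restore holds the base backup plus a recovery.conf with the usual
# restore_command; pg_xlog inside it can be symlinked to a ramdisk for speed
pg_ctl -D /tmp/pitr_restore -o "-p 5433 -c fsync=off" start

# Dump only the database you need from the temporary instance...
pg_dump -p 5433 my_database1 > my_database1.sql

# ...and load it back into the production instance
# (after clearing or recreating the damaged objects there)
psql -p 5432 -d my_database1 -f my_database1.sql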
Why do you want to separate the databases?
The way PITR works, it is not possible, since it operates on the complete cluster.
What you can do in that case is create a separate data directory and cluster for each of those databases (not recommended, though, since it would require different ports and separate postmaster instances).
I believe that the benefits of using PITR instead of regular dumps outweigh having separate backups for each database, so perhaps you can re-think the reasons for why you need to separate it.
Another way could be to set up some replication with Slony-I but that would require a separate machine (or instance) that receives the data. On the other hand, that way you would have a replicated system in near real-time.
Update for comment:
To recover from mistakes like deleting a table, PITR would be perfect, since you can replay to a specific point in time. However, for 500 databases I understand that can be a lot of overhead. Slony-I would probably not work, since it is replicating; I'm not sure how it handles table deletions.
I am not aware of any other ways to go. What I would do would probably still be to go for PITR and just not make any mistakes ;). Jokes aside, depending on how frequently mistakes are made, this could be a solution:
Set it up for PITR.
Have a second instance ready on standby.
When a mistake happens, replay the restore to the point in time on the second instance.
Do a pg_dump of the affected database from that instance.
Do a pg_restore on the production instance for that database.
However, it would require you to have a second instance ready, either on the same server or a different one (different is recommended). Also, the restore time would be a bit longer since it would require you to do one extra dump and restore.
I think the way you are doing this is flawed. You should have one database with multiple schemas and roles; then you can use PITR. However, PITR is not a replacement for dumps.
I am considering log-shipping of Write Ahead Logs (WAL) in PostgreSQL to create a warm-standby database. However, I have one table in the database that receives a huge number of INSERTs/DELETEs each day but whose data I don't care about protecting. To reduce the amount of WAL produced, I was wondering: is there a way to prevent activity on one table from being recorded in the WAL?
Ran across this old question, which now has a better answer. Postgres 9.1 introduced "Unlogged Tables", which are tables that don't log their DML changes to WAL. See the docs for more info, but at least now there is a solution for this problem.
See Waiting for 9.1 - UNLOGGED tables by depesz, and the 9.1 docs.
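A minimal example (table name made up); changes to an unlogged table skip the WAL entirely, at the cost of the table being truncated after a crash and not being replicated to standbys:

psql mydb <<'SQL'
CREATE UNLOGGED TABLE scratch_events (
    id         bigserial PRIMARY KEY,
    payload    text,
    created_at timestamptz DEFAULT now()
);
SQL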
Unfortunately, I don't believe there is. The WAL logging operates on the page level, which is much lower than the table level and doesn't even know which page holds data from which table. In fact, the WAL files don't even know which pages belong to which database.
You might consider moving your high activity table to a completely different instance of PostgreSQL. This seems drastic, but I can't think of another way off the top of my head to avoid having that activity show up in your WAL files.
To offer one option to my own question: there are temp tables ("temporary tables are automatically dropped at the end of a session, or optionally at the end of the current transaction (see ON COMMIT below)"), which I think don't generate WAL. Even so, this might not be ideal, as the table creation and design will have to live in the code.
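For example, a small sketch of that workaround (names made up); temporary tables live in session-local buffers and are not written to the WAL:

psql mydb <<'SQL'
BEGIN;
-- Dropped automatically at COMMIT (see ON COMMIT in the docs)
CREATE TEMPORARY TABLE staging_rows (
    id      integer,
    payload text
) ON COMMIT DROP;

INSERT INTO staging_rows VALUES (1, 'scratch data');
COMMIT;
SQL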
I'd consider memcached for use-cases like this. You can even spread the load over a bunch of cheap machines too.