RDS instance unusably slow after restoring from snapshot - postgresql

Details:
Database: Postgres.
Version: 9.6
Host: Amazon RDS
Problem: After restoring from snapshot, the database is unusably slow.
Why: Something AWS calls the "first touch penalty". When a newly restored instance becomes available, the EBS volume attachment is complete but not all the data has been migrated to the attached EBS volume from S3. Only after initially "touching" the data will RDS realize the data isn't on the EBS volume and it needs to pull it from S3. This completely destroys our performance. We also cannot use dd or fio to pre-touch the data because RDS does not allow access to the mounted EBS volumes.
What I've done: Contacted AWS support. They acknowledged that it's a problem, that they are working on it, and that the only solution for now is to select * from all tables.
Why I still need help: The select * strategy did speed things up (I selected everything from the public schema), but not as much as is needed. So I read up on how postgres stores data to disk. There's a heck of a lot on disk that wouldn't be "touched" by a simple select from user-defined tables.
My question: Being limited to only SQL queries/functions and not having direct access to the underlying disk, what are the best sql statements I can use to "touch" as much as possible on the disk in order to get it loaded on the EBS volume from S3?
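To make the "a heck of a lot on disk" point concrete, a catalog query along these lines (a sketch, not part of the original post) lists every file-backed relation in the database, including the indexes and TOAST tables that a plain select from user tables never reads:

-- Sketch: enumerate file-backed relations (heap tables, indexes, TOAST tables,
-- materialized views) with their on-disk sizes; anything listed here that a
-- warm-up strategy never reads stays cold on the restored volume.
SELECT n.nspname AS schema,
       c.relname AS relation,
       c.relkind,                                  -- r=table, i=index, t=toast, m=matview
       pg_size_pretty(pg_relation_size(c.oid)) AS size_on_disk
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r', 'i', 't', 'm')
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY pg_relation_size(c.oid) DESC;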

My suggestion would be to manually trigger a VACUUM ANALYZE; this will read through each table within scope and refresh the planner's statistics as a side effect. You can scope it fairly easily to a certain schema or to the database in question, which can help keep the total time down if you have multiple databases within the one host.
The operation is rather time consuming and I'm not aware of a good way to parallelize it from plain SQL. There is also the vacuumdb utility, but that essentially just issues the same VACUUM statements for you.
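As a minimal sketch of the scoping (the schema name is just an example), you can generate one statement per table and run the generated output, e.g. with \gexec in psql 9.6+:

-- Sketch: emit a VACUUM (ANALYZE) statement for every table in the public schema.
SELECT format('VACUUM (ANALYZE) %I.%I;', schemaname, tablename)
FROM pg_tables
WHERE schemaname = 'public';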
Source: I asked RDS support this very question a few days ago.
[1] https://www.postgresql.org/docs/9.5/static/sql-vacuum.html

Related

PostgreSQL: even read access changes data files on disk, leading to large incremental backups using pgbackrest

We are using pgbackrest to backup our database to Amazon S3. We do full backups once a week and an incremental backup every other day.
The size of our database is around 1 TB; a full backup is around 600 GB and even an incremental backup is around 400 GB!
We found out that even read access (pure select statements) on the database has the effect that the underlying data files (in /usr/local/pgsql/data/base/xxxxxx) change. This results in large incremental backups and also in very large storage (costs) on Amazon S3.
Usually the files with low index names (e.g. 391089.1) change on read access.
On an update, we see changes in one or more files - the index could correlate to the age of the row in the table.
Some more facts:
Postgres version 13.1
Database is running in docker container (docker version 20.10.0)
OS is CentOS 7
We see the phenomenon on multiple servers.
Can someone explain why PostgreSQL changes data files on pure read access?
We tested on an otherwise idle database with nothing else accessing it.
This is normal. Some cases I can think of right away are:
a SELECT or other SQL statement setting a hint bit
This is a shortcut for subsequent statements that access the data, so they don't have to consult the commit log any more.
a SELECT ... FOR UPDATE writing a row lock
autovacuum removing dead row versions
These are leftovers from DELETE or UPDATE.
autovacuum freezing old visible row versions
This is necessary to prevent data corruption if the transaction ID counter wraps around.
The only way to fairly reliably prevent PostgreSQL from modifying a table in the future is:
never perform an INSERT, UPDATE or DELETE on it
run VACUUM (FREEZE) on the table and make sure that there are no concurrent transactions
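For example, assuming an archive table named measurements that is no longer written to:

-- Freeze all row versions so later reads no longer need to set hint bits
-- and autovacuum has no anti-wraparound work left to do on this table.
-- 'measurements' is just an illustrative table name.
VACUUM (FREEZE) measurements;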

How to update a postgres database in kubernetes using scripts [CI/CD]?

I have one Postgres database deployed in Kubernetes, attached to a PVC [with RWX access mode]. What is the right way to update the database (e.g. create a table) through my CI/CD instead of logging in to the pod and running queries [without deleting the PVC]?
My understanding is that the underlying reason for your question is how to deploy DB structure changes onto production with minimal downtime. For that I'd go with Blue-Green Deployments.
After your comments I assume that you already have a running instance of PostgreSQL and would like to modify the content of the DB by altering the file structure directly on "disk" (in this case the PVC).
Modifying data structure directly on disk is not the best idea if we are speaking about data integrity, etc.
The reasons for that statement are explained in this article, which describes exactly how PostgreSQL stores data on disk.
PostgreSQL (by default) writes blocks of data (what PostgreSQL calls pages) to disk in 8k chunks.
Additionally, there is a mapping between a table and its file path, so PostgreSQL knows exactly which file stores which table.
SELECT pg_relation_filepath('test_data');
pg_relation_filepath
----------------------
base/20886/186770
In this example the file base/20886/186770, relative to the data directory (here /database), contains the actual data for the table test_data.
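To relate that file back to the 8k pages mentioned above, a quick sketch using the same example table:

-- Confirm the page size and see how many pages back the example table.
SHOW block_size;                                   -- typically 8192
SELECT pg_relation_size('test_data') / current_setting('block_size')::int AS pages;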
What is the right way to update the database instead of logging in to the pod and running queries
However, if you are sure that you have a complete set of files for the DB to operate on (like the set you work with during pg_dump / pg_restore), you can try placing that data on another PVC and recreating the pod; that will still result in downtime, though.
Hope that helps.

PostgreSQL: point-in-time recovery for individual database and not whole cluster

As per the standard Postgres documentation:
As with the plain file-system-backup technique, this method can only support restoration of an entire database cluster, not a subset.
From this, I understood that it is not possible to set up PITR for individual databases in a cluster (i.e. a database instance holding multiple databases).
If my understanding is incorrect, probably the next part of the question is not relevant, but if not, here it is:
I still do not see what the theoretical problem would be in setting this up, as each database is generating its own WAL archive.
The problem here is: I need to set up multiple Postgres clusters and somehow I have only 2 RHEL 7.6 machines to handle this. I am trying to reduce the number of clusters on these 2 machines to only 2. I am planning to create multiple databases rather than multiple instances to handle customer applications. But that means I have to sacrifice PITR, as PITR can only be performed at the instance/cluster level and not at the database level (as per the official documentation).
Could someone please help clarify my misunderstanding?
You are correct, you can only do PITR on a PostgreSQL database cluster, not on an individual database.
There is only one WAL stream for the complete database cluster; WAL is not split up per database.
Don't hesitate to run several PostgreSQL clusters on a single machine if that is advantageous for you.
There is little overhead in running a second database cluster. The biggest resource that is hogged by a cluster is shared buffers, but you want that to be only a fraction of the available RAM anyway. Most of the memory should be left to the filesystem cache that is shared by all PostgreSQL clusters.

wal-e/wal-g any benefit for simple backup and restore via S3

I'm using AWS RDS and have a need to replicate "database_a" in an RDS instance to "database_a" in a different RDS instance. The replication only needs to be once every 24 hours.
I'm currently solving this with pg_dump and pg_restore but am wondering if there is a better (ie faster/more efficient) way I can go about things.
Using wal-e/g and RDS, is it at all possible for my use case to simply push the latest changes from, say, the last 24 hours? The 2 RDS instances cannot speak to each other, so all transfer would go via S3. I'm not clear what the docs mean by 'When uploading backups to S3, the user should pass in the path containing the backup started by Postgres:' - does this mean I can create a pg backup on my EC2 and then point wal-g at this backup?
Finally, is it at all possible to just use wal-e/g for complete backups (i.e. non-incremental), just as I am doing now with pg_dump/pg_restore, and in doing so would I see a speed improvement by switching?
Thanks in advance,
In a word, yes.
On a system using dump/restore, you're consuming a lot more CPU and network resources (and therefore cost), which you could reduce notably by using the WALs for incremental backups and only taking a full image perhaps once a week. This is especially true if your database is mostly data that doesn't change. It might not hold if your database is not growing but is made up of records that are updated many times per 24 hours (e.g. stock prices).
Once you are publishing WALs to S3 frequently, then you'll have a far more up to date backup than nightly backups.
When publishing WALs you can recover to any point in time
WAL-E and WAL-G both have built in encryption
There is also differential backup support, but not something I've played with
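For reference, on a self-managed Postgres (RDS does not let you set a custom archive_command) the WAL-shipping side of wal-g is typically wired up roughly like this; the S3 prefix and credentials are assumed to already be configured in wal-g's environment:

-- Sketch: ship each completed WAL segment to S3 via wal-g as it is produced.
-- Note: changing archive_mode requires a server restart, not just a reload.
ALTER SYSTEM SET archive_mode = 'on';
ALTER SYSTEM SET archive_command = 'wal-g wal-push %p';

Full images are then taken with wal-g backup-push against the data directory on whatever schedule suits you (e.g. weekly), and the WAL stream in between is what gives you point-in-time recovery.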

Fastest way to restore sql dump to RDS

I am trying to restore a large *.sql dump (~4 GB) to one of my DBs on RDS. Last time I tried to restore it using Workbench it took 24+ hours for the whole process to complete.
I wonder if there is a quicker way to do this. Please help and share your thoughts.
EDIT: I have my SQL dump on my local computer, by the way.
At the moment I have 2 options in mind:
Follow this link
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Importing.NonRDSRepl.html (with low confidence)
dump the DB and compress it, then upload the compressed dump to one of my EC2 instances, then SSH to that EC2 instance and do
mysql> source backup.sql;
I prefer the second approach (simply because I have more confidence in it), and it should also cut the transfer time, since the dump is uploaded compressed and only uncompressed and restored once it is inside AWS.
My suggestion is to take table-wise backups of the large tables and restore them with indexes disabled, which makes the inserts considerably faster (more than double the speed); simply enable the indexes again after the restore completes.
Before restore command:
ALTER TABLE `table_name` DISABLE KEYS;
After restore completes:
ALTER TABLE `table_name` ENABLE KEYS;
Also add these extra commands at the top of the file to avoid a great deal of disk access:
SET FOREIGN_KEY_CHECKS = 0;
SET UNIQUE_CHECKS = 0;
SET AUTOCOMMIT = 0;
And add these at the end:
SET UNIQUE_CHECKS = 1;
SET FOREIGN_KEY_CHECKS = 1;
COMMIT;
I hope this will work, thank you.
Your intuition about using an EC2 intermediary is correct, but I really think the biggest thing that will benefit you is actually being inside AWS. I occasionally perform a similar procedure to move the contents of a 6GB database from one RDS DB instance to another.
My configuration:
Source DB instance and Target DB instance are in the same region and availability zones. Both are db.m3.large.
EC2 instance is in the same region and availability zone as the DB instances.
The EC2 instance is compute optimized. (I use c3.xlarge but would recommend the c4 family if I were to start again from scratch).
From here, I just use the EC2 instance to perform a very simple mysqldump from the source instance, and then a very simple restore (feeding the dump back through the mysql client) into the target instance.
Being in the same region and availability zone really makes a big difference, because it reduces network latency during the procedure. The data transfer is also free or near-free in this situation. The instance class you choose for both your EC2 and RDS instances is also important -- if you want this to be a fast operation, you want to be able to maximize I/O.
External factors like network latency and CPU can (and in my experience, do) bottleneck the operation if you don't provide enough resources. In my example with the db.m3.large instance, the dump takes less than 5 minutes and the restore takes about 15 minutes. I've attempted to restore to a db.m3.medium RDS instance before and the restore took a little over an hour because the CPU bottlenecked -- it wasn't able to keep up with the EC2 instance. Back when I was restoring from my local machine, being outside of their network caused the whole process to take over 4 hours.
This EC2 intermediary shouldn't need to be available 24/7. Turn it off or terminate it when you're done with it. I only have to pay for an hour of c3.xlarge time. Also remember that you can scale your RDS instance class up/down temporarily to increase resources available during your dump. Try to get your local machine out of the equation, if you can.
If you're interested in this strategy, AWS itself has provided some documentation on the matter.
The command line is almost always faster than Workbench.
Try using this command to restore your database.
To Restore :
mysql -u root -p YOUR_DB_NAME < D:\your\file\location\dump.sql
To Take Dump :
mysqldump -u root -p YOUR_DB_NAME > D:\your\file\location\dump.sql