AWS RDS instance created from snapshot very slow - postgresql

I have restored a snapshot of a PostgreSQL instance as a new instance with exactly the same configuration as the original. However, running queries takes much longer on the new instance: a query that takes less than 0.5 ms to execute on the original instance takes over 1.2 ms on the new one, and a nightly Python script that runs in 20 minutes on the old instance now takes over an hour on the new one. This has been going on for several days now.

I run VACUUM (ANALYZE, DISABLE_PAGE_SKIPPING); after our nightly snapshot restores for our staging DB to get everything running smoothly again.
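For reference, a minimal sketch of that step (the endpoint and database name are placeholders):
# Run right after the nightly restore finishes; DISABLE_PAGE_SKIPPING forces every
# heap page to be read, which also pulls the lazily-loaded blocks down from S3.
psql -h staging-db.xxxxxxxx.us-east-1.rds.amazonaws.com -d mydb \
     -c "VACUUM (ANALYZE, DISABLE_PAGE_SKIPPING);"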

Unfortunately this is normal, but it should go away after a while.
Snapshots are stored on S3, and when you create a new EBS volume from one, the volume only pulls in blocks of data as they are requested, causing degraded performance until the whole volume is initialized. See these AWS docs for confirmation.
Those docs suggest using dd to force all the data to load, but on RDS you have no way to do that. You might want to try SELECTing everything you can instead, although that will still miss some things (like indexes).
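If the pg_prewarm extension is available on your RDS engine version, it can go a bit further than plain SELECTs, since it reads indexes, materialized views and TOAST data as well. A rough sketch (connection settings are assumed to be in the environment):
psql <<'SQL'
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
-- Read every block of every user relation: tables, indexes, matviews and TOAST.
SELECT n.nspname, c.relname, pg_prewarm(c.oid::regclass)
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r', 'i', 'm', 't')
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');
SQL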

Related

wal-e/wal-g any benefit for simple backup and restore via S3

I'm using AWS RDS and have a need to replicate "database_a" in an RDS instance to "database_a" in a different RDS instance. The replication only needs to be once every 24 hours.
I'm currently solving this with pg_dump and pg_restore, but am wondering if there is a better (i.e. faster/more efficient) way to go about things.
Using wal-e/wal-g and RDS, is it at all possible for my use case to simply push just the changes from, say, the last 24 hours? The two RDS instances cannot talk to each other, so all transfer would be via S3. I'm not clear what the docs mean by 'When uploading backups to S3, the user should pass in the path containing the backup started by Postgres:' - does this mean I can create a pg backup on my EC2 instance and then point wal-g at this backup?
Finally, is it at all possible to just use wal-e/wal-g for complete (i.e. non-incremental) backups, just as I am doing now with pg_dump/pg_restore, and would I see a speed improvement by switching?
Thanks in advance,
In a word, yes.
On a system using dump/restore, you're consuming a lot more CPU and network resources (and therefore cost), which you could reduce notably by using the WALs for incremental backups and only taking a full image perhaps once a week. This is especially true if your database is mostly data that doesn't change. It might not hold if your database is not growing but is made of records that are updated many times per 24 hours (e.g. stock prices).
Once you are publishing WALs to S3 frequently, you'll have a far more up-to-date backup than a nightly dump.
When publishing WALs, you can recover to any point in time.
WAL-E and WAL-G both have built-in encryption.
There is also differential backup support, though that's not something I've played with.
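For a rough idea of the workflow, here is a sketch of the wal-g side, assuming a Postgres host where you control postgresql.conf and the data directory (for example an EC2 instance; bucket, region and paths are placeholders):
export WALG_S3_PREFIX='s3://my-bucket/pg-backups'
export AWS_REGION='us-east-1'
# Continuous WAL shipping: in postgresql.conf on the source host, set
#   archive_mode = on
#   archive_command = 'wal-g wal-push %p'
# Occasional full base backup (e.g. weekly):
wal-g backup-push "$PGDATA"
# On the restore side, pull the latest base backup from S3:
wal-g backup-fetch /var/lib/postgresql/9.6/main LATEST
# then replay WAL with restore_command = 'wal-g wal-fetch %f %p' in recovery.conf.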

RDS instance unusably slow after restoring from snapshot

Details:
Database: Postgres.
Version: 9.6
Host: Amazon RDS
Problem: After restoring from snapshot, the database is unusably slow.
Why: Something AWS calls the "first touch penalty". When a newly restored instance becomes available, the EBS volume attachment is complete but not all the data has been migrated to the attached EBS volume from S3. Only after initially "touching" the data will RDS realize the data isn't on the EBS volume and it needs to pull it from S3. This completely destroys our performance. We also cannot use dd or fio to pre-touch the data because RDS does not allow access to the mounted EBS volumes.
What I've done: Contacted AWS support. They acknowledged that it's a problem, that they are working on it, and that the only solution is to select * from all tables.
Why I still need help: The select * strategy did speed things up (I selected everything from the public schema), but not as much as needed. So I read up on how Postgres stores data on disk. There's a heck of a lot on disk that wouldn't be "touched" by a simple select from user-defined tables.
My question: Being limited to only SQL queries/functions and not having direct access to the underlying disk, what are the best sql statements I can use to "touch" as much as possible on the disk in order to get it loaded on the EBS volume from S3?
My suggestion would be to manually trigger a VACUUM ANALYZE; this will do a full table scan of each table within scope and update the planner with fresh statistics. You can scope it fairly easily to a certain schema; limiting it to the database in question and a particular schema, for example, can help keep the total time down if you have multiple databases on the one host.
The operation is rather time-consuming, and I'm not aware of a good way to parallelize it. There is also the vacuumdb utility, but that just runs a query with a vacuum statement in it.
Source: I asked RDS support this very question a few days ago.
[1] https://www.postgresql.org/docs/9.5/static/sql-vacuum.html
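As a concrete sketch of scoping that vacuum: generate one statement per table in the schema you care about and let psql run each of them (endpoint, database and schema names are placeholders; \gexec needs psql 9.6+). With DISABLE_PAGE_SKIPPING the vacuum reads every heap page instead of skipping the ones the visibility map marks all-visible, which is the kind of "touching" you're after for table data (indexes are still only scanned when there are dead tuples to remove):
psql -h mydb.xxxxxxxx.us-east-1.rds.amazonaws.com -d mydb <<'SQL'
SELECT format('VACUUM (ANALYZE, DISABLE_PAGE_SKIPPING) %I.%I', schemaname, tablename)
FROM pg_tables
WHERE schemaname = 'public'
\gexec
SQL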

OrientDB delete empty clusters automatically

Is there any way in OrientDB (distributed) to delete empty clusters automatically, to keep the number of files small and the database cleaner?
According to the documentation, there seems to be nothing automatic that does what you need. One way to approach it would be to write a Java function that checks which clusters are empty and deletes them. That function could then be run by hand every so often, or scheduled with some scheduler.
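As a rough manual illustration of the same idea, using the OrientDB console instead of a Java function (connection details and the cluster name are placeholders, and the exact commands can vary between OrientDB versions): LIST CLUSTERS shows each cluster with its record count, and the empty ones can then be dropped by hand.
$ORIENTDB_HOME/bin/console.sh <<'EOF'
CONNECT remote:localhost/mydb admin admin
LIST CLUSTERS
DROP CLUSTER my_empty_cluster
EOF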

Fastest way to restore sql dump to RDS

I am trying to restore a large *.sql dump (~4 GB) to one of my databases on RDS. Last time I tried to restore it using Workbench, it took 24+ hours for the whole process to complete.
I wonder if there is a quicker way to do this. Please help and share your thoughts.
EDIT: I have my SQL dump on my local computer, by the way.
At the moment I have two options in mind:
Follow this link (with low confidence):
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Importing.NonRDSRepl.html
Dump the DB and compress it, then upload the compressed dump to one of my EC2 instances, SSH to that EC2 instance, and do:
mysql> source backup.sql;
I prefer the second approach (simply because I have more confidence in it), and it should also speed up the transfer, since the compressed dump is uploaded first and only then uncompressed and restored.
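A rough sketch of the second option (hosts, users and paths are placeholders):
gzip -c backup.sql > backup.sql.gz
scp backup.sql.gz ec2-user@my-ec2-host:~/
# then, on the EC2 instance:
gunzip backup.sql.gz
mysql -h my-rds-endpoint.rds.amazonaws.com -u admin -p mydb < backup.sql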
My suggestion is to take table-wise backups of the large tables and restore them with indexes disabled, which inserts records much more quickly (at more than double the speed); then simply enable the indexes after the restore completes.
Before restore command:
ALTER TABLE `table_name` DISABLE KEYS;
After restore completes:
ALTER TABLE `table_name` ENABLE KEYS;
Also add these extra commands at the top of the file to avoid a great deal of disk access:
SET FOREIGN_KEY_CHECKS = 0;
SET UNIQUE_CHECKS = 0;
SET AUTOCOMMIT = 0;
And add these at the end:
SET UNIQUE_CHECKS = 1;
SET FOREIGN_KEY_CHECKS = 1;
COMMIT;
I hope this will work, thank you.
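If you don't want to edit a multi-gigabyte dump by hand, one way to wrap it is something like this (file names are placeholders):
{ echo 'SET FOREIGN_KEY_CHECKS = 0; SET UNIQUE_CHECKS = 0; SET AUTOCOMMIT = 0;'
  cat backup.sql
  echo 'SET UNIQUE_CHECKS = 1; SET FOREIGN_KEY_CHECKS = 1; COMMIT;'
} > backup_fast.sql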
Your intuition about using an EC2 intermediary is correct, but I really think the biggest thing that will benefit you is actually being inside AWS. I occasionally perform a similar procedure to move the contents of a 6GB database from one RDS DB instance to another.
My configuration:
The source DB instance and target DB instance are in the same region and availability zone. Both are db.m3.large.
EC2 instance is in the same region and availability zone as the DB instances.
The EC2 instance is compute optimized. (I use c3.xlarge but would recommend the c4 family if I were to start again from scratch).
From here, I just use the EC2 instance to perform a very simple mysqldump from the source instance, and then restore it into the target instance with the plain mysql client.
Being in the same region and availability zone really makes a big difference, because it reduces network latency during the procedure. The data transfer is also free or near-free in this situation. The instance class you choose for both your EC2 and RDS instances is also important: if you want this to be a fast operation, you want to be able to maximize I/O.
External factors like network latency and CPU can bottleneck the operation (and in my experience have) if you don't provide enough resources. In my example with the db.m3.large instance, the mysqldump takes less than 5 minutes and the restore takes about 15 minutes. I've attempted to restore to a db.m3.medium RDS instance before, and the restore took a little over an hour because the CPU was the bottleneck; it wasn't able to keep up with the EC2 instance. Back when I was restoring from my local machine, being outside of their network caused the whole process to take over 4 hours.
This EC2 intermediary shouldn't need to be available 24/7. Turn it off or terminate it when you're done with it. I only have to pay for an hour of c3.xlarge time. Also remember that you can scale your RDS instance class up/down temporarily to increase resources available during your dump. Try to get your local machine out of the equation, if you can.
If you're interested in this strategy, AWS themselves have provided some documentation on the matter.
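In case it helps, this is roughly what the dump and restore look like from the EC2 instance (endpoints, user and database name are placeholders):
# Dump from the source RDS instance:
mysqldump -h source-db.xxxxxxxx.us-east-1.rds.amazonaws.com -u admin -p \
    --single-transaction --quick --routines --triggers mydb > mydb.sql
# Restore into the target RDS instance with the plain mysql client:
mysql -h target-db.xxxxxxxx.us-east-1.rds.amazonaws.com -u admin -p mydb < mydb.sql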
The command line is always faster than Workbench.
Try using these commands to restore your database.
To restore:
mysql -h YOUR_RDS_ENDPOINT -u root -p YOUR_DB_NAME < D:\your\file\location\dump.sql
To take a dump:
mysqldump -u root -p YOUR_DB_NAME > D:\your\file\location\dump.sql

Meteor app as front end to externally updated mongo database

I'm trying to set up an app that will act as a front end to an externally updated mongo database. The data will be pushed into the database by another process.
So far I have the app connecting to the external Mongo instance and pulling data out with no issues, but it's not reactive (it's not seeing any of the new data going into the Mongo database).
I've done some digging and so far can only find that I might need to set up a replica set and use the oplog. Is there a way to do this without going to a replica set (or is that the best way anyway)?
The code so far is really simple, a single collection, a single publication (pulling out the last 10 records from the database) and a single template just displaying that data.
No deps that I've written (not sure if that's what I'm missing).
Thanks.
Any reason not to use Oplog? From what I've read it is the recommended approach even if your DB isn't updated by an external process, and a must if it is.
Nevertheless, even without Oplog your app should see the changes made to the DB by the external process. It will take longer (up to 10 seconds, since Meteor falls back to polling), but it should update.
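If you do go the oplog route, the setup is mostly just two environment variables when you start Meteor (URLs, credentials and authSource are placeholders; the oplog URL has to point at the local database of a replica-set member):
export MONGO_URL='mongodb://app_user:secret@mongo.example.com:27017/mydb'
export MONGO_OPLOG_URL='mongodb://oplog_user:secret@mongo.example.com:27017/local?authSource=admin'
meteor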