I am trying to set up a process in which a Postgres staging database is populated with production data.
I have a working implementation using pg_dump and pg_restore, but I was wondering whether something in RDS itself is possible.
We have nightly snapshots on our production database. My goal would be for RDS to take the latest database snapshot, migrate it into an existing database, and do this on some scheduled cadence (once a week or something like that).
Is this possible to configure in the console? If not, is there some combination of Lambda/CloudFormation that can do this?
My goal would be for RDS to take the latest database snapshot, migrate it into an existing database
RDS never loads a snapshot into an existing database. It always creates an entirely new database server/cluster from a snapshot.
Is this possible to configure in the console? If not, is there some combination of Lambda/CloudFormation that can do this?
You would have to write some code that creates a new staging database server from the production snapshot, and deletes your current server.
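A minimal sketch of that code, using boto3 (the instance identifiers, region, and instance class here are assumptions, not values from the question):

```python
import boto3

# Assumed names/settings for illustration only.
PROD_INSTANCE = "prod-db"
STAGING_INSTANCE = "staging-db"

rds = boto3.client("rds", region_name="us-east-1")

# 1. Find the most recent automated snapshot of the production instance.
snapshots = rds.describe_db_snapshots(
    DBInstanceIdentifier=PROD_INSTANCE,
    SnapshotType="automated",
)["DBSnapshots"]
latest = max(snapshots, key=lambda s: s["SnapshotCreateTime"])

# 2. Delete the existing staging instance (it cannot be overwritten in place).
rds.delete_db_instance(
    DBInstanceIdentifier=STAGING_INSTANCE,
    SkipFinalSnapshot=True,
)
rds.get_waiter("db_instance_deleted").wait(DBInstanceIdentifier=STAGING_INSTANCE)

# 3. Create a brand-new staging instance from the production snapshot,
#    reusing the old identifier so application configuration stays the same.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier=STAGING_INSTANCE,
    DBSnapshotIdentifier=latest["DBSnapshotIdentifier"],
    DBInstanceClass="db.t3.medium",
)
```

Run on a weekly schedule (for example from a Lambda triggered by an EventBridge rule), this gives roughly the cadence described in the question.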
If you are using CloudFormation, then it could manage this for you if you specify the DBSnapshotIdentifier parameter. You would have to modify that parameter each week, and then update your CloudFormation stack.
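If you go that route, the weekly job only has to flip the parameter and trigger a stack update. A hedged sketch, where the stack name, snapshot identifier, and the assumption that other parameters keep their previous values are all illustrative:

```python
import boto3

cloudformation = boto3.client("cloudformation")

def point_stack_at_snapshot(stack_name: str, snapshot_id: str) -> None:
    """Update only the DBSnapshotIdentifier parameter, keeping the template as-is."""
    cloudformation.update_stack(
        StackName=stack_name,
        UsePreviousTemplate=True,
        Parameters=[
            # Any other stack parameters would need {"ParameterKey": ..., "UsePreviousValue": True} entries.
            {"ParameterKey": "DBSnapshotIdentifier", "ParameterValue": snapshot_id},
        ],
    )

# Hypothetical usage: point the staging stack at last night's snapshot.
point_stack_at_snapshot("staging-db-stack", "rds:prod-db-2024-01-07-06-10")
```

Changing DBSnapshotIdentifier causes CloudFormation to replace the DB instance, which is exactly the "new server from the snapshot" behaviour described above.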
Related
I was trying to figure out how to schedule a refresh of a materialized view on an Azure Database for PostgreSQL Single Server. One solution is to use the pg_cron extension, but it seems to be available only on Azure Database for PostgreSQL Flexible Server and not on Single Server. I did not find any other option; any suggestion in this regard would be really helpful.
I did not find any Postgres scheduler extension for a database hosted on Azure, so I created a small microservice to schedule the database functions.
Example Link
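For reference, the core of such a microservice can be very small. A rough sketch with psycopg2, where the connection string, view name, and interval are placeholders:

```python
import time
import psycopg2

# Placeholder connection details and view name.
DSN = "host=myserver.postgres.database.azure.com dbname=mydb user=me password=secret"
VIEW = "my_materialized_view"
INTERVAL_SECONDS = 3600  # refresh hourly

def refresh_view() -> None:
    # Open a fresh connection per run so a dropped connection does not kill the loop.
    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        cur.execute(f"REFRESH MATERIALIZED VIEW {VIEW};")

while True:
    refresh_view()
    time.sleep(INTERVAL_SECONDS)
```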
I have a Postgres database created on Heroku, but it looks like it's spun up on AWS based on the server name. Unfortunately, I can't tell whether it's an Aurora or RDS instance.
I'm trying to create some data pipelines in Azure Data Factory to do ETL work moving Blob files onto the Postgres database, but I'm having a tough time setting it up. The default Postgres option isn't working; it looks like Data Factory tries to make a JDBC connection.
I'm stuck and am clueless as to how I should set up the connection.
I have created an AWS RDS PostgreSQL DB instance in one VPC and I need to "move" it to another VPC.
I created a snapshot of the original db instance.
Re-created it in a new VPC (using terraform).
How can I retrieve one particular database from my initial db instance snapshot into the new DB Instance?
When you restore a snapshot, all databases are restored.
If you want to copy only a particular database, the AWS Database Migration Service (DMS) can be used if the Postgres version is higher than 9.4.
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html
If the Postgres version is lower, then only a manual export/import of the database can be used. Another instance (or host) where the export files can be stored is needed.
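A minimal sketch of that manual path, wrapping pg_dump/pg_restore from Python; the hostnames, user, and database name are placeholders, and passwords are assumed to come from PGPASSWORD or ~/.pgpass:

```python
import subprocess

SOURCE_HOST = "old-instance.xxxx.us-east-1.rds.amazonaws.com"  # instance restored from the snapshot
TARGET_HOST = "new-instance.xxxx.us-east-1.rds.amazonaws.com"  # instance in the new VPC
DB_NAME = "mydb"
DUMP_FILE = "/tmp/mydb.dump"

# Export just the one database, in custom format, from the old instance.
subprocess.run(
    ["pg_dump", "-h", SOURCE_HOST, "-U", "postgres", "-Fc", "-f", DUMP_FILE, DB_NAME],
    check=True,
)

# Import it into the instance in the new VPC (the database must already exist there).
subprocess.run(
    ["pg_restore", "-h", TARGET_HOST, "-U", "postgres", "-d", DB_NAME, "--no-owner", DUMP_FILE],
    check=True,
)
```

This is where the extra instance mentioned above comes in: it has to be able to reach both VPCs and hold the dump file.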
I have a Postgres instance on AWS from which snapshots are taken daily. I have manually restored the latest snapshot to another instance for reporting purposes, but I'm wondering if it is possible to automate this so I can restore the snapshots daily?
You can schedule the restore CLI command to be executed on a particular event or at a particular time.
aws rds restore-db-instance-from-db-snapshot --db-instance-identifier mynewdbinstance --db-snapshot-identifier mydbsnapshot
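One way to do the scheduling is an EventBridge rule that triggers a Lambda running the equivalent boto3 call. A hedged sketch; the rule name, schedule, and Lambda ARN are assumptions, and the restore Lambda itself is assumed to already exist:

```python
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

# Assumed ARN of a Lambda that performs restore_db_instance_from_db_snapshot
# (the boto3 equivalent of the CLI command above).
RESTORE_LAMBDA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:restore-reporting-db"

# Fire every day at 05:00 UTC.
events.put_rule(
    Name="daily-reporting-restore",
    ScheduleExpression="cron(0 5 * * ? *)",
)
events.put_targets(
    Rule="daily-reporting-restore",
    Targets=[{"Id": "restore-lambda", "Arn": RESTORE_LAMBDA_ARN}],
)

# Allow EventBridge to invoke the Lambda.
lambda_client.add_permission(
    FunctionName="restore-reporting-db",
    StatementId="allow-eventbridge-daily-restore",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
)
```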
I have two databases on Amazon RDS, both Postgres: Database 1 and Database 2.
I need to restore an instance from a snapshot of Database 1 for my Staging environment. (Database 2 is my current Staging DB).
However, I want the data from a few of the tables in Database 2 to overwrite the tables in the newly restored snapshot. What is the best way to do this?
When restoring RDS from a Snapshot, a new database instance is created. If you only wish to copy a portion of the snapshot:
Restore the snapshot to a new (temporary) database
Connect to the new database and dump the desired tables using pg_dump
Connect to your staging server and restore the tables using pg_restore (most probably deleting any matching existing tables first)
Delete the temporary database
pg_dump (in its plain-text format) outputs the SQL commands that are then used to recreate the tables and restore the data. Look at the content of a plain-format dump to understand how the restore process actually works.
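A rough sketch of steps 2 and 3 above, with placeholder hostnames, database name, and table names:

```python
import subprocess

TEMP_HOST = "temp-restore.xxxx.rds.amazonaws.com"  # temporary instance restored from the snapshot
STAGING_HOST = "staging.xxxx.rds.amazonaws.com"    # your staging server
TABLES = ["orders", "customers"]                   # illustrative table names
DUMP_FILE = "/tmp/tables.dump"

# Step 2: dump only the desired tables from the temporary instance.
dump_cmd = ["pg_dump", "-h", TEMP_HOST, "-U", "postgres", "-Fc", "-f", DUMP_FILE]
for table in TABLES:
    dump_cmd += ["-t", table]
dump_cmd.append("mydb")
subprocess.run(dump_cmd, check=True)

# Step 3: restore them into staging; --clean drops the matching tables first.
subprocess.run(
    ["pg_restore", "-h", STAGING_HOST, "-U", "postgres", "-d", "mydb", "--clean", DUMP_FILE],
    check=True,
)
```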
I hope this still helps someone else.
My team and I faced a similar issue: we also had two Postgres databases and we also just needed to back up some tables from db1 to db2.
What we did was use a Python Lambda function (on AWS Lambda, of course) that connects to both databases and checks whether db1.table1 has the same data as db2.table1; if not, the Lambda function writes the missing data from db1.table1 into db2.table1. We went with Lambda because we wanted to automate the process, since the main database (let's say db1) is constantly being updated. In addition, it allowed us to back up only the tables we wanted (say, 3 tables out of 10) instead of the whole database.
Note: Maybe you want to do these writes using temporary tables to avoid issues with any constraints you have in your tables.
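A stripped-down sketch of that kind of Lambda; the connection strings, table name, and key column are assumptions, and the real function handled several tables:

```python
import psycopg2

# Placeholder connection settings; in practice these would come from environment variables.
DB1_DSN = "host=db1.example.rds.amazonaws.com dbname=app user=sync password=secret"
DB2_DSN = "host=db2.example.rds.amazonaws.com dbname=app user=sync password=secret"

TABLE = "table1"
KEY = "id"  # assumed primary-key column used to detect missing rows

def handler(event, context):
    with psycopg2.connect(DB1_DSN) as src, psycopg2.connect(DB2_DSN) as dst:
        with src.cursor() as s, dst.cursor() as d:
            # Find keys already present in db2.table1.
            d.execute(f"SELECT {KEY} FROM {TABLE}")
            existing = {row[0] for row in d.fetchall()}

            # Pull everything from db1.table1 and keep only the rows db2 is missing.
            s.execute(f"SELECT * FROM {TABLE}")
            columns = [col[0] for col in s.description]
            missing = [row for row in s.fetchall()
                       if row[columns.index(KEY)] not in existing]

            # Write the missing rows into db2.table1. Per the note above, you could
            # insert into a temporary table first and then merge, to avoid tripping
            # over constraints.
            placeholders = ", ".join(["%s"] * len(columns))
            for row in missing:
                d.execute(
                    f"INSERT INTO {TABLE} ({', '.join(columns)}) VALUES ({placeholders})",
                    row,
                )
    return {"copied_rows": len(missing)}
```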