How to migrate a clustered RDS (Postgres) database to a different account?

I am migrating from AccountA (source) to AccountB (target), in the same region.
I ran templates, so AccountB already has an RDS cluster, but with no data. The DB instance ID is exactly the same as the one in the source account.
**My goal is to keep the same endpoint as before, since we're retiring AccountA completely and I don't want to change application code for an updated endpoint.**
I took a snapshot of the cluster (writer instance), copied the snapshot with a KMS key, and shared it with AccountB. All good. Then, from AccountB (target), I copied the snapshot and attempted to restore it. I thought I could restore directly into the existing RDS cluster, but I see that's not doable: a restore always creates a new instance.
So I renamed the existing empty RDS cluster to something else to free up the DB instance ID, then renamed the temporary cluster to the original name. It worked, but this doesn't seem efficient.
What is the best practice for this kind of RDS data migration?
Clustered RDS (writer + reader), cross-account.
I didn't create a snapshot for the reader. Will it be synced from the writer automatically once I restore?

Your experience is correct -- RDS Snapshots are restored as a new RDS instance (rather than loading data into an existing instance).
By "endpoint", if you are referring to the DNS Name used to connect to the database, then the new database will not have the same endpoint. If you want to preserve an endpoint, then you can create a DNS Name with a CNAME record that resolves to the DNS Name of the database. That way, you can change the CNAME record to point to a different endpoint without needing to change the code. (However, you probably haven't done this, so you'll need to change the code to point to the new DNS Name anyway, so it's the same amount of work.)
You are also correct that you do not need to snapshot/copy the readers. The snapshot only contains the data for the cluster itself, so after the restore you simply create the reader (and writer) instances on the new cluster.
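As a rough end-to-end sketch of the flow you described, assuming an Aurora PostgreSQL cluster and using placeholder account IDs, identifiers, and ARNs:

# In AccountA (111111111111): share the KMS-encrypted snapshot copy with AccountB
# (the KMS key itself must also be shared via its key policy).
aws rds modify-db-cluster-snapshot-attribute \
  --db-cluster-snapshot-identifier mycluster-snap-copy \
  --attribute-name restore \
  --values-to-add 222222222222

# In AccountB (222222222222): restore the shared snapshot. A restore always creates
# a new cluster; it cannot load data into the existing empty one.
aws rds restore-db-cluster-from-snapshot \
  --db-cluster-identifier mycluster-restored \
  --snapshot-identifier arn:aws:rds:us-east-1:111111111111:cluster-snapshot:mycluster-snap-copy \
  --engine aurora-postgresql \
  --kms-key-id arn:aws:kms:us-east-1:222222222222:key/EXAMPLE-KEY-ID

# The cluster snapshot carries only the data, so add the writer and reader instances explicitly.
aws rds create-db-instance --db-cluster-identifier mycluster-restored \
  --db-instance-identifier mycluster-writer --db-instance-class db.r6g.large --engine aurora-postgresql
aws rds create-db-instance --db-cluster-identifier mycluster-restored \
  --db-instance-identifier mycluster-reader --db-instance-class db.r6g.large --engine aurora-postgresql

# To reuse the original identifiers, rename the empty cluster out of the way, then rename the
# restored one (this is the rename dance from the question; a CNAME avoids needing it).
aws rds modify-db-cluster --db-cluster-identifier mycluster \
  --new-db-cluster-identifier mycluster-old --apply-immediately
aws rds modify-db-cluster --db-cluster-identifier mycluster-restored \
  --new-db-cluster-identifier mycluster --apply-immediately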

Related

What is the process to clean up orphaned dynamic creds in Hashicorp Vault?

I am using Hashicorp Vault to generate dynamic creds in multiple database clusters. We have one database cluster that is somewhat ephemeral so on occasion it will be refreshed from another database cluster. This database cluster will be connected to via Vault dynamic creds just like the other database clusters.
I have a process to clean up the database users brought over by the backup from the source system when this cluster is refreshed, but I don't know how I should handle the Vault cleanup. The database config will be the same (same host/user), but all the database user accounts recently created by Vault will be gone after the refresh, so I don't know how to reset/clean up Vault for that database. The database system I'm using (Redshift) doesn't seem to have DROP USER ... IF EXISTS type syntax; otherwise I would simply use that in the dynamic role's revocation_statements and let things cycle out naturally.
So my main question is how do I reset or delete all the dynamic creds that were created for a specific database cluster in Vault if the database cluster is refreshed or no longer exists?
I figured out the answer to this and I wanted to share here in case anyone else encounters this.
The "lease revoke" documentation explains that you can use the -prefix switch to revoke leases based on a partial or full prefix match.
Using this information you can run a command similar to the following in order to force revoke existing leases for a specific role:
vault lease revoke -force -prefix database/creds/ROLE_NAME
Using the -force switch removes the lease even if the revocation_statements fail to process (as is the case when the database user no longer exists).
As an aside, the following command can be used to list leases and is useful to check before and after that all the leases are, in fact, revoked:
vault list sys/leases/lookup/database/creds/ROLE_NAME
This solves my problem of "how do I remove leases for orphaned Vault dynamic credentials" in cases where the target database is refreshed from a backup, which is exactly my use case.
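Putting the two commands together, a small sketch of the cleanup you might run after each refresh (ROLE_NAME and the database/ mount path are placeholders for whatever your setup uses):

#!/usr/bin/env bash
set -euo pipefail

ROLE_NAME="my-redshift-role"   # placeholder role name

# List what's outstanding before the cleanup (vault list errors if the path is empty).
vault list "sys/leases/lookup/database/creds/${ROLE_NAME}" || echo "no leases found"

# Force-revoke everything under the role's prefix, even though the
# revocation_statements will fail because the users were wiped by the refresh.
vault lease revoke -force -prefix "database/creds/${ROLE_NAME}"

# Confirm nothing is left.
vault list "sys/leases/lookup/database/creds/${ROLE_NAME}" || echo "no leases found"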

Can you use AWS DMS to move Aurora DB from one account to another?

I am trying to migrate an Aurora cluster from one of our accounts to another. We don't actually have a lot of write requests and the database itself is quite small, but we still decided to minimize the downtime.
I have looked into several options:
Use a snapshot: stop writes to the source DB, take a snapshot, share it, and restore it in the other account. This would definitely introduce some downtime.
Use Aurora cloning: stop writes to the source DB, clone the cluster into the target account, and switch over to the target DB. According to AWS, cloning is much faster than taking and restoring a snapshot, so the downtime should be shorter.
I am not sure whether I can use DMS for this, as I did not find useful docs/tutorials about moving Aurora across accounts. I am also not sure whether DMS will sync write requests to the target DB during the migration.
If DMS cannot sync live changes, then I should probably use Bucardo for a live migration.
Looking at the docs, Aurora with PostgreSQL compatibility is supported as both a source and a target endpoint. So, to answer your question: yes, it's possible.
Obviously, your source Aurora DB has to be reachable from the target account. Check that the DB endpoint is publicly accessible and that traffic is not restricted by network ACL or security group rules.
Also, if you want to enable ongoing replication, you need to grant the rds_replication (or rds_superuser) role to the source database user. Link to the docs.
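For example, a quick sketch of that grant, run against the source cluster as a user that already has rds_superuser (the connection string and dms_user are placeholders):

# Give the user that DMS connects with the replication privileges it needs for CDC.
psql "host=source-cluster.cluster-abc123.us-east-1.rds.amazonaws.com dbname=app user=postgres" \
  -c "GRANT rds_replication TO dms_user;"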
We actually ended up using DMS for this migration. What we did was:
Take a snapshot of the source DB in the original account.
Share the snapshot with the target account and restore it over there. (You have to use a snapshot to migrate things like triggers, custom types, sequences, etc.)
Set up connectivity (e.g., VPC peering and security group rules) between the two accounts.
Set up DMS in the source account (endpoints, replication instance, task).
Write SQL to temporarily disable/drop constraints, triggers, etc. that may cause errors while loading the source data (see the sketch after this list).
Use DMS to load the source data and enable ongoing replication.
Re-enable/re-add the constraints, triggers, etc.
Run post-migration tests.
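A rough sketch of the disable/re-enable step on the target, assuming the tables live in the public schema and $TARGET_DSN is a placeholder connection string (this only touches user triggers; foreign keys, if they get in the way, would have to be dropped and re-created separately, as the steps above suggest):

# Generate and run "disable user triggers" statements for every table in the public schema.
psql "$TARGET_DSN" -At -c \
  "SELECT format('ALTER TABLE %I.%I DISABLE TRIGGER USER;', schemaname, tablename)
   FROM pg_tables WHERE schemaname = 'public';" > disable_triggers.sql
psql "$TARGET_DSN" -f disable_triggers.sql

# ... run the DMS full load + ongoing replication here ...

# Re-enable the triggers once the load has caught up.
psql "$TARGET_DSN" -At -c \
  "SELECT format('ALTER TABLE %I.%I ENABLE TRIGGER USER;', schemaname, tablename)
   FROM pg_tables WHERE schemaname = 'public';" > enable_triggers.sql
psql "$TARGET_DSN" -f enable_triggers.sql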

Create an RDS/Postgres Replica in another AWS account?

I have an AWS account with a Postgres RDS database that represents the production environment for an app. We have another team that is building an analytics infrastructure in a different AWS account. They need to be able to pull data from our production database to hydrate their reports.
From my research so far, it seems there are a couple options:
Create a bash script, stashed on an EC2 instance in one of the accounts, that runs on a cron schedule and uses pg_dump and pg_restore (see the sketch below).
Automate creating a snapshot on a schedule and ship it to the other account's S3 bucket. Then create a Lambda (or other script) that triggers when the snapshot lands in the S3 bucket and restores it. The downside is that we'd have to create a new RDS instance with each restore (since you can't restore a snapshot into an existing instance), which changes the FQDN of the database (we could mitigate that with Route53 and a CNAME that gets updated, but it's complicated).
Create a read replica in the origin AWS account and open up security for that instance so they can access it directly (but then my account is responsible for all the costs of hosting and accessing it).
None of these seem like good options. Is there some other way to accomplish this?
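For completeness, a minimal sketch of what option 1 could look like, with placeholder connection strings and credentials supplied via ~/.pgpass; scheduled with something like "0 3 * * * /opt/scripts/sync_analytics.sh" in cron:

#!/usr/bin/env bash
set -euo pipefail

# Production (source) and analytics (destination) databases; placeholders throughout.
SRC="host=prod-db.abc123.us-east-1.rds.amazonaws.com dbname=app user=readonly"
DST="host=analytics-db.def456.us-east-1.rds.amazonaws.com dbname=app user=loader"

# Dump the source in custom format, then restore it over the destination,
# dropping existing objects first so the copy stays in line with production.
pg_dump "$SRC" --format=custom --no-owner --no-privileges --file=/tmp/app.dump
pg_restore --dbname="$DST" --clean --if-exists --no-owner /tmp/app.dump
rm -f /tmp/app.dump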
I would suggest using AWS Database Migration Service (DMS). It can listen to changes on your source database and stream them to a target (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Task.CDC.html).
There is also a third-party blog post explaining how to set this up:
https://medium.com/tensult/cross-account-and-cross-region-rds-mysql-db-replication-part-1-55d307c7ae65
Pricing is per hour and depends on the size of the replication instance. It runs in the target account, so it will not be on your cost center.
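A minimal sketch of the DMS pieces with the AWS CLI, run from the target (analytics) account; identifiers, hostnames, ARNs, and the table-mappings file are placeholders:

# Replication instance that does the work (this is what the hourly pricing applies to).
aws dms create-replication-instance \
  --replication-instance-identifier analytics-repl \
  --replication-instance-class dms.t3.medium \
  --allocated-storage 50

# Source = the production Postgres RDS instance, target = the analytics database.
aws dms create-endpoint \
  --endpoint-identifier prod-source --endpoint-type source --engine-name postgres \
  --server-name prod-db.abc123.us-east-1.rds.amazonaws.com --port 5432 \
  --database-name app --username dms_user --password 'REPLACE_ME'

aws dms create-endpoint \
  --endpoint-identifier analytics-target --endpoint-type target --engine-name postgres \
  --server-name analytics-db.def456.us-east-1.rds.amazonaws.com --port 5432 \
  --database-name app --username dms_user --password 'REPLACE_ME'

# Full load followed by ongoing change data capture (CDC).
aws dms create-replication-task \
  --replication-task-identifier analytics-sync \
  --migration-type full-load-and-cdc \
  --source-endpoint-arn arn:aws:dms:us-east-1:222222222222:endpoint:SOURCE_ENDPOINT_ID \
  --target-endpoint-arn arn:aws:dms:us-east-1:222222222222:endpoint:TARGET_ENDPOINT_ID \
  --replication-instance-arn arn:aws:dms:us-east-1:222222222222:rep:REPL_INSTANCE_ID \
  --table-mappings file://table-mappings.json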

Gradual PostgreSQL database migration from an AWS EC2 instance to Amazon's RDS

We have a quite large (at least to us) database with over 20,000 tables, running on an AWS EC2 instance, and for several reasons we'd like to move it into an AWS RDS instance. We've tried a few different approaches for migrating to RDS, but given the data volume involved (2 TB), RDS's restrictions around users and permissions, and compatibility issues, we haven't been able to accomplish it.
Given the above, I was wondering whether PostgreSQL supports something like mapping a remote schema into a database. If that were possible, we could attempt individual per-schema migrations rather than the whole database at once, which would make the process less painful.
I've read about the IMPORT FOREIGN SCHEMA feature, which seems to be supported from version 9.5 and seems to do the trick, but is there something like that for 9.4.9?
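For reference, this is roughly what that 9.5+ feature looks like over postgres_fdw, with placeholder hosts, schemas, and credentials; on 9.4 you'd have to define each foreign table by hand with CREATE FOREIGN TABLE instead:

# Run against the RDS (importing) side; postgres_fdw is a supported RDS extension.
psql "host=rds-instance.abc123.us-east-1.rds.amazonaws.com dbname=target user=master" <<'SQL'
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER ec2_src FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'ec2-db.internal', port '5432', dbname 'source');

CREATE USER MAPPING FOR CURRENT_USER SERVER ec2_src
    OPTIONS (user 'migrator', password 'REPLACE_ME');

CREATE SCHEMA app_schema_remote;

-- Requires PostgreSQL 9.5+ on the importing side.
IMPORT FOREIGN SCHEMA app_schema FROM SERVER ec2_src INTO app_schema_remote;
SQL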
You might want to look at AWS Database Migration Service (DMS) and the associated Schema Conversion Tool (SCT).
These can move data from an existing database into RDS and convert the schema and associated objects (or at least report on what would need to be changed).
You can run this in AWS, point it at your existing EC2-based database as the source, and use a new RDS instance as the destination.

How to replicate MySQL database to Cloud SQL Database

I have read that you can replicate a Cloud SQL database to MySQL. Instead, I want to replicate from a MySQL database (that the business uses to keep inventory) to Cloud SQL so it can have up-to-date inventory levels for use on a web site.
Is it possible to replicate MySQL to Cloud SQL? If so, how do I configure that?
This is something that is not yet possible in CloudSQL.
I'm using DBSync to do it, and it's working fine.
http://dbconvert.com/mysql.php
The Sync version does what you want.
It works well with App Engine and Cloud SQL. You must authorize external connections first.
This is a rather old question, but it might be worth noting that this now seems possible by configuring external masters.
The high-level steps are:
Create a dump of the data from the master and upload the file to a storage bucket (sketched below).
Create a master instance in Cloud SQL.
Set up a replica of that instance, using the external master's IP, username, and password; also provide the dump file location.
Set up additional replicas if needed.
Voilà!
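A rough sketch of the first step, with placeholder host, database, and bucket names; the dump needs binary-log coordinates so the Cloud SQL replica knows where to start replicating:

# Dump the business's MySQL inventory database with binlog coordinates included
# (--master-data=1 writes the CHANGE MASTER TO statement into the dump).
mysqldump --host=inventory-db.internal --user=repl_user -p \
  --databases inventory \
  --single-transaction --master-data=1 --hex-blob \
  | gzip > inventory.sql.gz

# Upload the dump to a Cloud Storage bucket for the replica-creation step.
gsutil cp inventory.sql.gz gs://my-migration-bucket/inventory.sql.gz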