AWS CloudFormation: Apply an RDS DB Snapshot without recreating the DB Instance

Is it possible to apply a DB snapshot to an existing DB using CloudFormation in one step?
I know it is possible if you run CloudFormation twice - first to rename the existing resource, and a second time to revert the name and set the DBSnapshotIdentifier: !Ref DBSnapshot property.
But that takes too much time - is it possible to do it in one step?
The main goal is to update a database using a snapshot from another database (for example, applying the STG DB to DEV so they have similar data).
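For context, the reason the two-pass rename trick is needed: RDS can only restore a snapshot into a new instance, and CloudFormation mirrors that. A minimal sketch of the relevant resource (names and engine are hypothetical):

```yaml
# Sketch only (hypothetical names): DBSnapshotIdentifier restores data
# when the instance resource is created or replaced - it cannot load a
# snapshot into a running instance in place.
Parameters:
  DBSnapshot:
    Type: String
    Description: Snapshot ID to restore from (e.g. a manual STG snapshot)

Resources:
  DevDatabase:
    Type: AWS::RDS::DBInstance
    Properties:
      DBInstanceClass: db.t3.medium
      Engine: postgres
      # Changing this value makes CloudFormation replace the instance
      # with a new one restored from the snapshot.
      DBSnapshotIdentifier: !Ref DBSnapshot
```

So there is no true one-step, in-place path: any snapshot restore implies a replacement instance, which is why the rename/revert dance (or an out-of-band data load such as pg_dump/pg_restore) is required.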

Related

How to migrate a clustered RDS (Postgres) to a different account?

I am migrating from AccountA (source) to AccountB (target), same region.
I ran the templates, so AccountB already has an RDS cluster, but with no data. The DB instance ID is exactly the same as what I have in the source account.
**My goal is to have the same endpoint as before, since we're retiring AccountA completely and I don't want to change the code for the updated endpoint.**
I took a snapshot of the cluster (writer instance), then copied the snapshot with a KMS key and shared it with AccountB. All good. Now, from AccountB (the target), I copied the snapshot and attempted to restore. I thought I could restore directly into the existing RDS cluster, but I see that's not doable, since a restore always creates a new instance.
Then I renamed the existing empty RDS cluster to something else to free up the DB instance ID, and renamed the temp cluster to the original name. It worked, but this seems inefficient.
What is the best practice for RDS data migration with a clustered RDS (writer - reader) across accounts?
I didn't create a snapshot for the reader. Will it be synced from the writer automatically once I restore?
Your experience is correct -- RDS Snapshots are restored as a new RDS instance (rather than loading data into an existing instance).
By "endpoint", if you are referring to the DNS Name used to connect to the database, then the new database will not have the same endpoint. If you want to preserve an endpoint, then you can create a DNS Name with a CNAME record that resolves to the DNS Name of the database. That way, you can change the CNAME record to point to a different endpoint without needing to change the code. (However, you probably haven't done this, so you'll need to change the code to point to the new DNS Name anyway, so it's the same amount of work.)
You are correct that you do not need to snapshot/copy the Readers -- the snapshot only contains the data for the main database, so you simply create the Readers on the new cluster after the restore.
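In CloudFormation terms, a Reader is just another DB instance attached to the restored cluster - a sketch, again with hypothetical names:

```yaml
# Sketch (hypothetical names): attaching an additional DB instance to an
# Aurora cluster creates a reader; the first instance acts as the writer.
ReaderInstance:
  Type: AWS::RDS::DBInstance
  Properties:
    DBClusterIdentifier: my-restored-cluster
    DBInstanceClass: db.r5.large
    Engine: aurora-postgresql
```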

Azure DevOps YAML - Update a dynamic number of DBs

Currently, I am using Azure DevOps YAML pipelines to run the Flyway CLI and migrate a single database. When code gets pushed, the YAML pipeline triggers and runs Flyway, which updates the DB.
We are planning to create multiple instances of this DB. However, the DB instances are created dynamically, so we plan to store their connection strings in a master DB plus Key Vault.
Is it possible to achieve the following?
Code gets committed
YAML queries the master DB and gets X connection strings
YAML loops over the X strings and runs Flyway for each connection string
All DBs get updated
I do not think there is such a way. An alternative I can think of is to create a console application that does the querying and calls Flyway, and the YAML pipeline just calls this on the build server.
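If the wrapper logic ends up in a pipeline script step rather than a console application, it might look roughly like this - a sketch only, assuming sqlcmd and the Flyway CLI are on the agent, and all names (master server, table, variables) are hypothetical:

```yaml
# Rough sketch (hypothetical names): fetch one JDBC connection string per
# line from the master DB, then run Flyway once per string.
steps:
  - script: |
      # -h -1 suppresses column headers so the output is easy to loop over.
      sqlcmd -S "$(MasterDbServer)" -d MasterDb \
             -Q "SET NOCOUNT ON; SELECT ConnString FROM TenantDbs" \
             -h -1 -W > connstrings.txt
      # Run the migrations against each database in turn.
      while read -r conn; do
        flyway -url="$conn" migrate
      done < connstrings.txt
    displayName: Migrate all tenant DBs
```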

AWS DMS "Load complete, replication ongoing" not working (MongoDB to DocDB)

I am trying to build a PoC for a MongoDB to DocDB migration with DMS.
I've set up a MongoDB instance with some dummy data and an empty DocDB. Source and target endpoints are also set up in DMS, and both of them connect successfully to my databases.
When I create a migration task in DMS, everything seems to work fine. All existing data is successfully replicated from the MongoDB instance to DocDB, and the migration task state is "Load complete, replication ongoing".
At this point I tried creating new entries in existing collections, as well as creating new empty collections in MongoDB, but nothing happens in DocDB. If I understand correctly, the replication should be real-time, and anything I create should be replicated almost instantly?
Also, there is no indication of errors or warnings whatsoever... I don't suppose it's a connectivity issue to the databases, since the initial data is replicated fine.
The users I am using for the migration also have admin privileges in both databases.
Does anyone have any suggestions?
@PPetkov - could you check the following?
1. Check if the right privileges were assigned to the user in the MongoDB endpoint, according to https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MongoDB.html.
2. Check if a replica set, which is required for capturing changes, was appropriately configured on the MongoDB instance.
3. Once done, try searching for "]E:" or "]W:" in the CloudWatch logs to spot any noticeable failures/warnings.
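On point 2: DMS reads ongoing changes from MongoDB's oplog, which only exists when mongod runs as a replica set (even a single-node one). A minimal sketch of the relevant mongod.conf section (the set name is arbitrary):

```yaml
# mongod.conf sketch: running the node as a replica set enables the
# oplog that DMS uses for the "replication ongoing" phase.
replication:
  replSetName: rs0
```

After restarting mongod, the set still has to be initiated once (rs.initiate() from the mongo shell) before the oplog starts recording changes.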

Create an RDS/Postgres Replica in another AWS account?

I have an AWS account with a Postgres RDS database that represents the production environment for an app. We have another team that is building an analytics infrastructure in a different AWS account. They need to be able to pull data from our production database to hydrate their reports.
From my research so far, it seems there are a couple options:
Create a bash script that runs on a cron schedule, uses pg_dump and pg_restore, and stash it on an EC2 instance in one of the accounts.
Automate the process of creating a snapshot on a schedule and then ship it to the other account's S3 bucket. Then create a Lambda (or other script) that triggers when the snapshot is placed in the S3 bucket and restores it. The downside is that we'd have to create a new RDS instance with each restore (since you can't restore a snapshot to an existing instance), which changes the FQDN of the database (we could mitigate that using Route53 and a CNAME that gets updated, but this is complicated).
Create a read replica in the origin AWS account and open up security for that instance so they can just access it directly (but then my account is responsible for all the costs associated with hosting and accessing it).
None of these seem like good options. Is there some other way to accomplish this?
I would suggest using AWS Database Migration Service (DMS). It can listen to changes on your source database and stream them to a target (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Task.CDC.html).
There is also a third-party blog post explaining how to set this up:
https://medium.com/tensult/cross-account-and-cross-region-rds-mysql-db-replication-part-1-55d307c7ae65
Pricing is per hour, depending on the size of the replication EC2 instance. It runs in the target account, so it will not be on your cost center.
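A rough sketch of the core DMS pieces in CloudFormation-style YAML (the endpoint ARNs and table mappings are placeholders; real templates also need the endpoints, subnet/security groups, and IAM roles):

```yaml
# Sketch only (placeholder ARNs): an initial full load plus ongoing
# change data capture (CDC) from the source account's database.
Resources:
  ReplicationInstance:
    Type: AWS::DMS::ReplicationInstance
    Properties:
      ReplicationInstanceClass: dms.t3.medium

  ReplicationTask:
    Type: AWS::DMS::ReplicationTask
    Properties:
      MigrationType: full-load-and-cdc
      ReplicationInstanceArn: !Ref ReplicationInstance
      SourceEndpointArn: arn:aws:dms:REGION:ACCOUNT:endpoint:SOURCE  # placeholder
      TargetEndpointArn: arn:aws:dms:REGION:ACCOUNT:endpoint:TARGET  # placeholder
      TableMappings: >-
        {"rules":[{"rule-type":"selection","rule-id":"1","rule-name":"1",
        "object-locator":{"schema-name":"%","table-name":"%"},
        "rule-action":"include"}]}
```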

Move RDS instance from EC2-Classic to VPC

I am currently migrating my production system from EC2-Classic to the VPC platform.
Everything is done except for the RDS instance, which is still in EC2-Classic.
My original plan was to do the migration with some downtime: shut down all instances, take a database snapshot, and create a new instance in the VPC from that snapshot (the RDS "Restore snapshot" feature).
Unfortunately, when I tried to do this I realized that I cannot restore to the instance type I want.
When I click "Restore", Amazon offers me only a limited number of options:
Expensive db.m3 and db.r3 instances
Previous-generation db.t1, db.m1, and db.m2 instances
Ideally I'd like to create a db.t2 instance - is it possible to do that somehow?
Also, is there a way to migrate with zero downtime? So far I've found nothing in the Amazon docs.
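One note on the db.t2 question: db.t2 classes run only inside a VPC, so they typically become selectable only once the restore explicitly targets a DB subnet group. A sketch of the restore expressed in CloudFormation-style YAML (snapshot and subnet group names are hypothetical):

```yaml
# Sketch (hypothetical names): restoring the EC2-Classic snapshot into a
# VPC by specifying a DB subnet group, which is what makes VPC-only
# instance classes such as db.t2 available.
Resources:
  VpcDatabase:
    Type: AWS::RDS::DBInstance
    Properties:
      DBSnapshotIdentifier: my-classic-snapshot
      DBInstanceClass: db.t2.medium
      DBSubnetGroupName: my-vpc-subnet-group
      Engine: postgres
```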