New to CloudFormation. I am spawning a PostgreSQL RDS instance using an AWS CloudFormation script. Is there a way to enable PostGIS (and other extensions) from the AWS CloudFormation script?
Working with PostGIS
PostGIS is an extension to PostgreSQL for storing and managing spatial information. If you are not familiar with PostGIS, you can get a good general overview at PostGIS Introduction.
You need to perform a bit of setup before you can use the PostGIS
extension. The following list shows what you need to do; each step is
described in greater detail later in this section.
Connect to the DB instance using the master user name used to create the DB instance.
Load the PostGIS extensions.
Transfer ownership of the extensions to the rds_superuser role.
Transfer ownership of the objects to the rds_superuser role.
Test the extensions.
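The steps above map roughly to the following SQL, adapted from the linked AWS guide (run as the master user; the extension and schema names match the stock PostGIS packaging, but verify them against the guide for your PostgreSQL version):

```sql
-- Load the PostGIS extensions
CREATE EXTENSION postgis;
CREATE EXTENSION fuzzystrmatch;
CREATE EXTENSION postgis_tiger_geocoder;
CREATE EXTENSION postgis_topology;

-- Transfer ownership of the extension schemas to rds_superuser
ALTER SCHEMA tiger OWNER TO rds_superuser;
ALTER SCHEMA tiger_data OWNER TO rds_superuser;
ALTER SCHEMA topology OWNER TO rds_superuser;

-- Quick sanity test that the extension is usable
SELECT PostGIS_Version();
```

The AWS guide also transfers ownership of individual objects inside those schemas; see the linked page for that step.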
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.PostgreSQL.CommonDBATasks.html
I'm not sure, but maybe you can create a Lambda function along with the RDS instance in your CloudFormation template and then invoke the Lambda to perform the steps above. You would need to try it.
I think this can be done with AWSUtility::CloudFormation::CommandRunner.
Basically, we can run a bash command with it (https://aws.amazon.com/blogs/mt/running-bash-commands-in-aws-cloudformation-templates/).
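A rough sketch of what that could look like (the command, endpoint, role, and log group names are placeholders; the full setup in the linked blog post also needs networking properties and an IAM instance profile):

```yaml
Resources:
  EnablePostgis:
    Type: AWSUtility::CloudFormation::CommandRunner
    DependsOn: MyRdsInstance
    Properties:
      # Placeholder command; in practice, fetch the password from
      # Secrets Manager rather than inlining credentials in the template.
      Command: psql "host=mydb.example.com user=master dbname=postgres" -c "CREATE EXTENSION postgis;"
      Role: CommandRunnerInstanceProfile
      LogGroup: command-runner-logs
```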
I don't think you will be able to achieve it by using CloudFormation. CloudFormation is a provisioning tool, not a configuration management tool.
Related
I am looking for a way to manage schema changes to my AWS Aurora Postgres instance.
My whole AWS stack is set up using a CloudFormation template, which is used to automatically deploy the stack when a change is detected in source control. The CloudFormation template is built, a change set is prepared, and finally executed on the stack.
I was hoping that the table definition of my Aurora instance could go inside the CloudFormation template somehow, so the schema migrations could be a part of the change set. Is this possible?
Note, I have seen this recommendation: https://aws.amazon.com/blogs/opensource/rds-code-change-deployment/
For anything custom like that, use a custom resource Lambda that you can include in your CloudFormation stack. The Lambda will need a layer for your Postgres driver, and the migration script needs to be included in the Lambda.
See the answer at this link; it gives three different options for how you can trigger the Lambda:
Is it possible to trigger a lambda on creation from CloudFormation template
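A minimal sketch of how such a custom resource could be wired into the template (the resource and function names are hypothetical; MigrationFunction stands for the Lambda that holds the driver layer and migration script):

```yaml
Resources:
  SchemaMigration:
    Type: Custom::SchemaMigration
    DependsOn: AuroraCluster
    Properties:
      ServiceToken: !GetAtt MigrationFunction.Arn
      # Bump this value in source control to make the next change set
      # invoke the migration Lambda again on update.
      SchemaVersion: "2021-06-01"
```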
I am currently creating an RDS instance per account for several different AWS accounts. I use CloudFormation scripts for this.
When creating these databases, I would like for them to have a similar structure. I created an SQL script which I can successfully run manually after the CloudFormation script has finished. I would, however, like to execute this automatically as part of running the script.
My solution so far is to create an EC2 instance with a dependency on the RDS instance, have it run the script once, and then manually delete it later, but this is not a suitable solution. I couldn't find any other way, though.
Is it possible to run a query as part of a cloudformation script?
FYI: I'm creating a PostgreSQL 11.5 instance.
The proper way is to use custom resources.
This requires some new development, though. If you already have an EC2 instance that populates the RDS instance from its UserData, you can automate its termination as follows:
Set InstanceInitiatedShutdownBehavior to terminate.
At the end of the UserData script, execute shutdown -h now to shut down the instance.
Since your shutdown behavior is terminate, the instance will be automatically terminated.
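A sketch of the EC2 side under those assumptions (the AMI, connection string, and script location are placeholders; in practice the password should come from a secure store, not the template):

```yaml
SeedInstance:
  Type: AWS::EC2::Instance
  DependsOn: MyRdsInstance
  Properties:
    ImageId: ami-0123456789abcdef0   # placeholder AMI with psql installed
    InstanceType: t3.micro
    InstanceInitiatedShutdownBehavior: terminate
    UserData:
      Fn::Base64: |
        #!/bin/bash
        # Placeholder connection details; run the seed script once.
        psql "host=mydb.example.com user=master dbname=postgres" -f /opt/schema.sql
        # Shutdown behavior is 'terminate', so this terminates the instance.
        shutdown -h now
```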
We already have a cluster and an instance of Aurora PostgreSQL in the abc region. Now, as part of our disaster recovery strategy, we are trying to create a read replica in the xyz region.
I was able to create it manually by clicking on "Add Region" in AWS web console. As explained here.
As part of it, the following has been created:
1. A global database to the existing cluster
2. Secondary region cluster
3. Secondary region instance.
Everything is fine. Now I have to implement this through a CloudFormation script.
My first question is: can we do this through a CloudFormation script without losing data, given that the primary cluster and instance are already created?
If possible, please share the AWS documentation for such CloudFormation scripts.
Please see the other post on this subject: CloudFormation templates for Global Aurora Database
The API required for setting up the GlobalCluster is AWS::RDS::GlobalCluster, and this is currently not listed in the CloudFormation documentation.
I was able to do the same using Terraform, and that is documented for PostgreSQL here: Getting Aurora PostgreSQL Global Database setup using Terraform
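For reference, AWS::RDS::GlobalCluster has since been added to CloudFormation. A minimal sketch that adopts an existing primary cluster might look like the following (the identifiers are placeholders, so verify against the current resource reference before relying on it):

```yaml
Resources:
  MyGlobalCluster:
    Type: AWS::RDS::GlobalCluster
    Properties:
      GlobalClusterIdentifier: my-global-cluster
      # Pointing at the existing primary cluster adopts it into the global
      # cluster rather than creating a new one, which is what avoids data loss.
      SourceDBClusterIdentifier: !Ref ExistingPrimaryCluster
```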
I want to migrate an AWS PostgreSQL database to Google Cloud SQL. I can do this with a basic strategy: extract the AWS data, create a database in GCP, and restore the extracted data there. But I was wondering whether there is a more sophisticated way to do this, such as using Terraform or similar.
Yes. See https://cloud.google.com/solutions/migrating-postgresql-to-gcp/
For migrating MySQL there are more options available; however, at the time of writing, these only apply to MySQL:
https://cloud.google.com/sql/docs/mysql/migrate-data
https://cloud.google.com/sql/docs/mysql/replication/replication-from-external
I have read that you can replicate a Cloud SQL database to MySQL. Instead, I want to replicate from a MySQL database (that the business uses to keep inventory) to Cloud SQL so it can have up-to-date inventory levels for use on a web site.
Is it possible to replicate MySQL to Cloud SQL? If so, how do I configure that?
This is something that is not yet possible in Cloud SQL.
I'm using DBSync to do it, and it's working fine.
http://dbconvert.com/mysql.php
The Sync version does what you want. It works well with App Engine and Cloud SQL. You must authorize external connections first.
This is a rather old question, but it might be worth noting that this now seems possible by configuring external masters.
The high level steps are:
Create a dump of the data from the master and upload the file to a storage bucket
Create a master instance in Cloud SQL
Set up a replica of that instance, using the external master's IP, username, and password; also provide the dump file location
Set up additional replicas if needed
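The steps above map roughly to gcloud commands like the following (the instance names, source address, user, and bucket path are placeholders, and the exact flags come from the external-master docs and may have changed since):

```shell
# Represent the external MySQL master inside Cloud SQL
gcloud sql instances create external-master-repr \
    --source-ip-address=203.0.113.10 --source-port=3306

# Create the Cloud SQL replica, seeding it from the dump in the bucket
gcloud sql instances create cloudsql-replica \
    --master-instance-name=external-master-repr \
    --master-username=repl_user --master-password=change-me \
    --master-dump-file-path=gs://my-bucket/master-dump.sql.gz
```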
Voilà!