I created a datashare in a Serverless cluster.
Now I want to add that datashare to a Provisioned cluster.
The simplest way is to create a new database using the command below.
CREATE DATABASE temp_db FROM DATASHARE temp_share OF NAMESPACE '<producer-namespace>';
But I want to add that DATASHARE to my existing dev database.
How can I achieve that?
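One approach that may get close to this, assuming cross-database queries are available on the provisioned cluster: keep the datashare database created by the command above, and expose it inside dev through an external schema. A rough sketch (temp_db, public, and some_table are placeholder names):

-- Run from the dev database on the consumer (provisioned) cluster,
-- after temp_db has been created from the datashare as above.
CREATE EXTERNAL SCHEMA shared_schema
FROM REDSHIFT DATABASE 'temp_db' SCHEMA 'public';

-- Objects in the datashare can then be referenced from dev directly:
SELECT * FROM shared_schema.some_table;  -- some_table is hypothetical

This does not merge the datashare into dev, but it lets dev queries reference the shared objects without switching databases.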
Related
I'm currently using a provisioned Redshift cluster and managing database migrations with Flyway. I'm thinking of migrating to Redshift Serverless, but I'm not sure if I can still use Flyway to manage the migrations.
I've already added a rule to my security group to allow my IP (I'm trying to run the Flyway migrations locally), and I also have the Publicly accessible parameter set to On, following the steps in this document, and using the endpoint given by the workgroup.
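For what it's worth, Flyway connects over JDBC, so in principle only the connection settings should change. A minimal flyway.conf sketch, with the endpoint, user, and locations all placeholders to adapt:

# flyway.conf -- all values below are placeholders
flyway.url=jdbc:redshift://my-workgroup.123456789012.us-east-1.redshift-serverless.amazonaws.com:5439/dev
flyway.user=admin
flyway.password=<password>   # better supplied via the FLYWAY_PASSWORD environment variable
flyway.locations=filesystem:./sql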
I am looking for a way to manage schema changes to my AWS Aurora Postgres instance.
My whole AWS stack is set up using a CloudFormation template, which is used to automatically deploy the stack when a change is detected in source control. The CloudFormation template is built, a change set is prepared, and finally executed on the stack.
I was hoping that the table definition of my Aurora instance could go inside the CloudFormation template somehow, so the schema migrations could be part of the change set. Is this possible?
Note, I have seen this recommendation: https://aws.amazon.com/blogs/opensource/rds-code-change-deployment/
For anything custom like that, use a custom resource Lambda that you can include in your CloudFormation stack. The Lambda will need a layer for your Postgres driver, and the migration script needs to be included in the Lambda.
See the answer at this link for three different options for triggering the Lambda:
Is it possible to trigger a lambda on creation from CloudFormation template
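To make the shape of that concrete, here is a minimal sketch of the pattern in a CloudFormation template. All names are hypothetical, the IAM role and driver layer are assumed to be defined elsewhere in the stack, and the handler is reduced to a stub:

Resources:
  MigrationFunction:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.12
      Handler: index.handler
      Role: !GetAtt MigrationRole.Arn      # assumed to exist elsewhere in the stack
      Layers:
        - !Ref PostgresDriverLayer         # assumed layer providing the Postgres driver
      Code:
        ZipFile: |
          import cfnresponse               # available to inline (ZipFile) Lambdas
          def handler(event, context):
              try:
                  if event['RequestType'] in ('Create', 'Update'):
                      pass                 # connect to the DB and run the migration script here
                  cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
              except Exception:
                  cfnresponse.send(event, context, cfnresponse.FAILED, {})
  RunMigration:
    Type: Custom::SchemaMigration          # the custom resource that invokes the Lambda
    Properties:
      ServiceToken: !GetAtt MigrationFunction.Arn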
We already have a cluster and instance of Aurora PostgreSQL in the abc region. Now, as part of our disaster recovery strategy, we are trying to create a read replica in the xyz region.
I was able to create it manually by clicking "Add Region" in the AWS web console, as explained here.
As part of this, the following has been created:
1. A global database attached to the existing cluster
2. A cluster in the secondary region
3. An instance in the secondary region
Everything is fine. Now I have to implement this through a CloudFormation script.
My first question is: can we do this through a CloudFormation script without losing data, given that the primary cluster and instance have already been created?
If possible, please share the AWS docs for the CloudFormation scripts.
Please see the other post on this subject: CloudFormation templates for Global Aurora Database
The API required for setting up the global cluster is AWS::RDS::GlobalCluster, and this is currently not listed in the CloudFormation documentation.
I was able to do the same using Terraform, and that is documented for PostgreSQL here: Getting Aurora PostgreSQL Global Database setup using Terraform
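As a rough illustration of the Terraform route (all identifiers and the provider alias are placeholders; adopting an existing primary into a global cluster has caveats covered in the linked write-up):

# Adopt the existing primary cluster into a new global cluster.
resource "aws_rds_global_cluster" "global" {
  global_cluster_identifier    = "my-global-cluster"
  source_db_cluster_identifier = aws_rds_cluster.primary.arn
  force_destroy                = false
}

# Secondary cluster in the xyz region, attached to the global cluster.
resource "aws_rds_cluster" "secondary" {
  provider                  = aws.xyz     # provider alias configured for the xyz region
  cluster_identifier        = "my-secondary-cluster"
  engine                    = "aurora-postgresql"
  global_cluster_identifier = aws_rds_global_cluster.global.id
}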
We are planning to use Kube for Postgres deployments. Our applications will be microservices, each with a separate schema (or logical database). For security's sake, we'd like to have separate users for each schema/logical_db.
I suppose the DB/schema and user should be created by Kube, so the application itself does not need access to the DB admin account.
In Stolon, it seems it is only possible to create a single user and a single database, and this seems to be the case for other HA Postgres charts as well.
Question: what is the preferred way to create DB users for microservices in Kube?
When it comes to creating users, as you said, most charts and containers have environment variables for creating a user at boot time. However, most of them do not consider the possibility of creating multiple users at boot.
What other containers do, as you said, is keep the root credentials in k8s Secrets so they can access the database and create the proper schemas and users. This does not necessarily need to be done in the application logic; it can be done, for example, with an init container that sets up the proper database for your application to run.
https://kubernetes.io/docs/concepts/workloads/pods/init-containers
This way you would have a pod with two containers: one for your application and an init container for setting up the DB.
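A minimal sketch of that shape; the image, Secret, host, and SQL are all assumptions rather than values from any particular chart, and the SQL would need to be made idempotent to survive pod restarts:

apiVersion: v1
kind: Pod
metadata:
  name: my-service                  # hypothetical
spec:
  initContainers:
    - name: init-db
      image: postgres:15            # used only for its psql client
      env:
        - name: PGPASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-admin  # admin credentials stay in a Secret,
              key: password         # out of the application container
      command:
        - psql
        - "-h"
        - my-postgres               # hypothetical service name of the HA Postgres
        - "-U"
        - postgres
        - "-c"
        - "CREATE SCHEMA IF NOT EXISTS my_service; CREATE USER my_service_user PASSWORD 'changeme';"
  containers:
    - name: app
      image: my-service:latest      # never sees the admin credentials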
I use CloudFormation's AWS::RDS::DBCluster resource to create my Aurora MySQL database cluster.
My question is, has anyone created stored procedures as well as events in Aurora MySQL via CloudFormation? Is that even possible?
Delivering these via CloudFormation would allow me to recreate the infrastructure without deploying the stored procedures and events separately.
There's no way to configure stored procedures and events with the AWS::RDS::DBCluster CloudFormation resource directly.
My suggestion would be to provision an AWS::EC2::Instance with a UserData script that installs the mysql client, then executes the contents of a user-provided MySQL script that creates the events/stored procedures on the newly created DB instance.
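Sketched as a CloudFormation fragment (the AMI parameter, instance profile, S3 location, and password handling are all assumptions to fill in; DBCluster is the logical ID of the AWS::RDS::DBCluster):

MigrationInstance:
  Type: AWS::EC2::Instance
  Properties:
    ImageId: !Ref AmazonLinuxAmiId               # assumed parameter
    InstanceType: t3.micro
    IamInstanceProfile: !Ref MigrationProfile    # assumed; needs read access to the bucket below
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash
        yum install -y mysql
        aws s3 cp s3://my-bucket/procedures.sql /tmp/procedures.sql   # hypothetical bucket
        mysql -h ${DBCluster.Endpoint.Address} -u admin -p'<admin-password>' < /tmp/procedures.sql

An alternative worth weighing is the custom resource Lambda pattern from the earlier answer, which avoids leaving an instance running after the migration completes.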