Cloud SQL cross-region Replica with CMEK encryption - google-cloud-sql

The Cloud SQL encryption docs (https://cloud.google.com/sql/docs/sqlserver/cmek#when_does_interact_with_cmek_keys) state:
Read replicas from a CMEK-enabled instance inherit CMEK encryption with the same Cloud KMS key as the primary instance.
At the same time:
Note: The Cloud KMS key ring location must match the region where you want to create a Cloud SQL instance. A multi-region or global region key will not work. A request for creating a Cloud SQL instance fails if the regions don't match.
From these two pieces of information one could conclude that cross-region replicas cannot be used together with CMEK encryption.
However, we tested this in a lab by:
creating a KMS key ring + key in europe-west3 and a Cloud SQL primary instance in europe-west3 using that key
creating a KMS key ring + key in europe-west2 and a Cloud SQL replica in europe-west2 (a replica of the primary above) using the europe-west2 key
Can we rely on this behaviour in practice? Are the docs inaccurate?

Answer can be found on a different doc page:
When you create a read replica of a Cloud SQL instance in the same region, it inherits the same customer-managed encryption key as the parent instance. If you create a read replica in a different region, you are given a new list of customer-managed encryption keys to select from. Each region uses its own set of keys.
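In gcloud terms, the lab setup described above looks roughly like the following sketch. All project, instance, and key names are made up; the --disk-encryption-key flag expects the full KMS key resource name.

```shell
# Hypothetical names throughout; adjust project, regions, and key names.
# Key ring + key in the replica's region (europe-west2):
gcloud kms keyrings create sql-ring-ew2 --location=europe-west2
gcloud kms keys create sql-key-ew2 --keyring=sql-ring-ew2 \
    --location=europe-west2 --purpose=encryption

# Cross-region replica of a CMEK-enabled primary, using the europe-west2 key.
# The Cloud SQL service account must be granted
# roles/cloudkms.cryptoKeyEncrypterDecrypter on this key beforehand.
gcloud sql instances create my-replica-ew2 \
    --master-instance-name=my-primary-ew3 \
    --region=europe-west2 \
    --disk-encryption-key=projects/my-project/locations/europe-west2/keyRings/sql-ring-ew2/cryptoKeys/sql-key-ew2
```

The key point matching the quoted docs: the key passed to the replica lives in the replica's region, not the primary's.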

Related

What is the replication method used for the GCP CloudSQL Read Replica for Postgresql

Based on the descriptions, it seems that Write-Ahead Log (WAL) shipping is used for PostgreSQL read replicas.
Is it logical replication? I have tables without primary keys, yet when I spin up a read replica I have no problems, so I wonder what is used behind the scenes.
Cloud SQL "native" Postgres replicas (i.e., those you create via the Cloud Console, gcloud, or the API) use Postgres' built-in streaming physical replication, which ships WAL segments from the primary to the replica(s).
The primary-key requirement is a constraint of logical replication, for example when using an extension such as pglogical. You can configure your own replication relationships using pglogical (as well as Postgres' built-in logical decoding on versions that support it) in Cloud SQL, but this is not the same as the native replication offering, which is fully managed by Google.
But to answer your question directly: Cloud SQL's native read replicas, like the one you referred to, use WAL shipping (physical/streaming replication), as you suspected. That is why your tables replicate just fine without primary keys.
SELECT * FROM pg_replication_slots;
will show you the type of replication: the slot_type column is physical for streaming replication and logical for logical replication.
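You can also look at this from the primary's side: pg_stat_replication lists active streaming replication sessions. A sketch, assuming you can connect to the primary (host and user are placeholders):

```shell
# Run against the primary; slot_type = 'physical' indicates streaming/WAL replication.
psql "host=<primary ip> user=postgres" \
     -c "SELECT slot_name, slot_type, active FROM pg_replication_slots;"

# Active streaming replica sessions (state should be 'streaming'):
psql "host=<primary ip> user=postgres" \
     -c "SELECT client_addr, state, sync_state FROM pg_stat_replication;"
```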

Unable to share encrypted DocumentDB cluster snapshot with a different AWS account in the same region

I am trying to copy or share a DocumentDB cluster snapshot from one AWS account to another, but the existing cluster is encrypted, so I am not able to share it with other accounts. Is there any way to make the existing cluster unencrypted and then share it with the other account?
I believe this is a result of the following limitation for DocumentDB (and other services):
You can't share a snapshot that has been encrypted using the default AWS KMS encryption key of the account that shared the snapshot.
When you create a snapshot make sure to select a custom encryption key, and ensure you grant access to the account you intend to share to via the key policy.
More information is available in the Sharing Amazon DocumentDB Cluster Snapshots documentation.
The recommended approach is to copy your snapshot, re-encrypting it with a customer managed KMS key that the other account is granted access to, and then share that copy.
If you still want to follow the unencrypted approach, you will need to create an unencrypted cluster and restore the data from the encrypted one with a dump.
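With the AWS CLI, the copy-then-share flow could look like the sketch below. Snapshot identifiers, the KMS key ARN, and the account IDs are all placeholders.

```shell
# Copy the snapshot, re-encrypting with a customer managed KMS key (placeholder ARN).
aws docdb copy-db-cluster-snapshot \
    --source-db-cluster-snapshot-identifier my-encrypted-snapshot \
    --target-db-cluster-snapshot-identifier my-shareable-snapshot \
    --kms-key-id arn:aws:kms:eu-west-1:111111111111:key/your-cmk-id

# Share the copy with the target account. The CMK's key policy must also
# grant that account access to the key, or the restore will fail.
aws docdb modify-db-cluster-snapshot-attribute \
    --db-cluster-snapshot-identifier my-shareable-snapshot \
    --attribute-name restore \
    --values-to-add 222222222222
```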

AWS Aurora RDS PostgreSql create global database for existing cluster through cloud formation script

We already have an Aurora PostgreSQL cluster and instance in region abc. Now, as part of our disaster recovery strategy, we are trying to create a read replica in region xyz.
I was able to create it manually by clicking "Add Region" in the AWS web console, as explained here.
As part of this, the following has been created:
1. A global database for the existing cluster
2. A secondary-region cluster
3. A secondary-region instance
Everything is fine. Now I have to implement this through a CloudFormation script.
My first question is: can we do this through a CloudFormation script without losing data if the primary cluster and instance were already created?
If possible, please share the AWS documentation for the CloudFormation scripts.
Please see the other post on this subject: CloudFormation templates for Global Aurora Database
The resource required for setting up the global cluster is AWS::RDS::GlobalCluster, and this is currently not listed in the CloudFormation documentation.
I was able to do the same using Terraform and that is documented for PostgreSQL here: Getting Aurora PostgreSQL Global Database setup using Terraform
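If infrastructure-as-code support is not yet available to you, the same steps can also be driven with the AWS CLI. A rough sketch (identifiers, regions, and the cluster ARN are made up) that converts an existing cluster into a global database and adds a secondary region:

```shell
# Promote the existing regional cluster to a global database; the existing
# cluster becomes the primary member and keeps its data.
aws rds create-global-cluster \
    --global-cluster-identifier my-global-db \
    --source-db-cluster-identifier arn:aws:rds:abc-region:111111111111:cluster:my-existing-cluster

# Create the secondary cluster and an instance in the other region.
aws rds create-db-cluster \
    --region xyz-region \
    --db-cluster-identifier my-secondary-cluster \
    --engine aurora-postgresql \
    --global-cluster-identifier my-global-db

aws rds create-db-instance \
    --region xyz-region \
    --db-instance-identifier my-secondary-instance \
    --db-cluster-identifier my-secondary-cluster \
    --db-instance-class db.r5.large \
    --engine aurora-postgresql
```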

Enable encryption on existing database - AWS RDS Postgresql

I have an AWS RDS PostgreSQL database that was provisioned via Terraform with encryption disabled: storage_encrypted = false
This database needs to be encrypted now but I can see from the docs that enabling encryption is something that can only be done during DB creation.
I was considering creating a read replica of this instance with encryption enabled and then promoting this replica to be a standalone instance and finally pointing my app to this new instance. Is there a simpler way?
One of the ways to achieve this in a non-production environment is as follows -
Stop writes on the instance, i.e., stop the applications writing to the RDS tables
Create a manual snapshot of the unencrypted RDS instance
Go to Snapshots from the left panel and choose the snapshot just created
From the Actions, choose Copy snapshot option and enable encryption
Select the new encrypted snapshot
Go to Actions and select Restore snapshot
For a minimal downtime switch follow this -
https://aws.amazon.com/premiumsupport/knowledge-center/rds-encrypt-instance-mysql-mariadb/
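The console steps above map to AWS CLI calls roughly like this sketch. All identifiers and the KMS key ARN are placeholders:

```shell
# 1) Snapshot the unencrypted instance (after stopping writes).
aws rds create-db-snapshot \
    --db-instance-identifier my-unencrypted-db \
    --db-snapshot-identifier my-plain-snapshot

# 2) Copy the snapshot with encryption enabled.
aws rds copy-db-snapshot \
    --source-db-snapshot-identifier my-plain-snapshot \
    --target-db-snapshot-identifier my-encrypted-snapshot \
    --kms-key-id arn:aws:kms:eu-west-1:111111111111:key/your-cmk-id

# 3) Restore a new, encrypted instance from the encrypted snapshot,
#    then repoint the application at it.
aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier my-encrypted-db \
    --db-snapshot-identifier my-encrypted-snapshot
```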

deploying mongodb on google cloud platform?

Hello all. For my startup I am using Google Cloud Platform. I am using App Engine with Node.js and that part is working fine, but now I need a database. Since I am using MongoDB, I found this Cloud Launcher offering: https://console.cloud.google.com/launcher/details/click-to-deploy-images/mongodb?q=mongo
When I launched it, it created three instances in Compute Engine, but I don't know which is the primary instance and which are the secondaries. I have also read that the primary instance should be used for writing data and the secondaries for reading. So when I query my database, should I provide the secondary instance URL, and for updating/inserting data should I provide the primary instance URL? Otherwise, which URL should I use for CRUD operations on my MongoDB database?
Also, after launching this, do I have to make any changes to any conf file manually, or is that already done for me? And do I have to create instance groups for all three instances?
I'm sorry if this seems under-researched; Google Cloud Platform is quite new to me, there is not much documentation on it, and this is my first time deploying my code to servers, so I am a complete noob in this field. Thanks anyway, and please help me out here.
but now I don't know which is the primary instance and which is secondary,
Generally, Cloud Launcher names the primary with the suffix -1 (dash one). For example, by default it would create the instance mongodb-1-server-1 as the primary.
Although you can also discover which one is the primary by running rs.status() on any of the instances via the mongo shell. As an example:
mongo --host <External instance IP> --port <Port Number>
You can get the list of external IPs of the instances using gcloud. For example:
gcloud compute instances list
By default you won't be able to connect straight away; you need to create a firewall rule to open the relevant port(s) on the Compute Engine instances. For example:
gcloud compute firewall-rules create default-allow-mongo --allow tcp:<PORT NUMBER> --source-ranges 0.0.0.0/0 --target-tags mongodb --description "Allow mongodb access to all IPs"
Insert a sensible port number, and please avoid using the default value. You may also want to limit the source IP ranges, e.g., to your office IP. See also Cloud Platform: Networking
I read that the primary instance should be used for writing data and the secondaries for reading,
Generally, replication provides redundancy and high availability: the primary instance is used for reads and writes, while the secondaries act as replicas to provide a level of fault tolerance, e.g., against the loss of the primary server.
See also:
MongoDB Replication.
Replication Read Preference.
MongoDB Sharding.
when I query my database, should I provide the secondary instance URL, and for updating/inserting data should I provide the primary instance URL? Otherwise, which URL should I use for CRUD operations on my MongoDB database?
You can provide both in the MongoDB URI and the driver will figure out where to read and write. For example, in your Node.js app you could have:
mongodb://<instance 1>:<port 1>,<instance 2>:<port 2>/<database name>?replicaSet=<replica set name>
The default replica set name set by Cloud Launcher is rs0. Also see:
Node Driver: URI.
Node Driver: Read Preference.
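Putting it together, a connection string that prefers secondaries for reads could look like this (host IPs and database name are made up; rs0 is the Cloud Launcher default replica set name):

```shell
# readPreference=secondaryPreferred routes reads to secondaries when available,
# while writes always go to the primary.
mongo "mongodb://10.132.0.2:27017,10.132.0.3:27017,10.132.0.4:27017/mydb?replicaSet=rs0&readPreference=secondaryPreferred"
```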
after launching this, do I have to make any changes in any conf file manually, or is that already done for me? Also, do I have to make instance groups of all three instances?
This depends on your application's use case, but if you launched it through Click to Deploy, the MongoDB configuration should all be taken care of.
For a complete guide please follow this tutorial: Deploy MongoDB with Node.js. I would also recommend checking out the MongoDB security checklist.
Hope that helps.