Cross-region access to a database on AWS - PostgreSQL

I have created a PostgreSQL database on AWS in region1. Now I want a developer in region2 to help me with the database located in region1.
I know that AWS by default does not permit cross-region access. But is it possible to give the developer in region2 access to the database? If yes, how?

Yes, you can. Do the following:
- In region1: follow the linked guide to make your RDS instance publicly accessible.
- In region2: add a NAT gateway to your VPC so it can reach the internet.
The flow looks like this:
[region2: your_app -> NAT] ---(internet)--> [region1: Internet_gateway -> bastion_instance (optional) -> RDS]
With a recent update, you can also use an AWS Direct Connect Gateway to connect two VPCs across regions within the same AWS account. Ref
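As a rough sketch of the region1 side, assuming the AWS CLI is configured; the instance identifier, security group ID, and the developer's IP below are placeholders, not values from the question:

# Make the RDS instance reachable from outside its VPC
aws rds modify-db-instance \
  --db-instance-identifier my-postgres-db \
  --publicly-accessible \
  --apply-immediately

# Open the Postgres port only to the remote developer's IP
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 5432 \
  --cidr 203.0.113.10/32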

How to connect to AWS Aurora (PostgreSQL) using Prisma

I am working with NestJS to build an API. I created a serverless RDS Aurora (PostgreSQL) cluster to use as the database.
This is my Aurora (PostgreSQL) database instance (Connectivity and security)
This is my database configuration
This is my security group detail
Then I try to connect, using the endpoint, database, user, etc., with Prisma in NestJS:
generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "postgresql"
  url      = "postgresql://postgres:password#med.cluster-cnonikf1pbgi.ap-southeast-1.rds.amazonaws.com:5432/Medi?schema=public&ssl=true"
}
But when I run this command:
npx prisma migrate dev --name init
I got an error like this:
Error: P1001: Can't reach database server at `med.cluster-cnonikf1pbgi.ap-southeast-1.rds.amazonaws.com`:`5432`
Please make sure your database server is running at `med.cluster-cnonikf1pbgi.ap-southeast-1.rds.amazonaws.com`:`5432`.
I was able to connect directly to my Aurora cluster without the need for a special gateway or EC2 instance. This worked for me:
1. Make sure "Public access" is set to "Publicly accessible". You see this option when creating the DB, but you can also change it after the DB has been created: go to RDS -> Databases -> select a DB instance, not the cluster (the cluster does not seem to offer this option) -> click the "Modify" button in the top right -> scroll down to the "Connectivity" section -> expand it and you'll see the option to change this setting.
2. Ensure the VPC security group assigned to your DB grants external access to it. The same "Connectivity" section from step 1 also shows the VPC security group your DB is using; take note of its name. You can view the details of the security group on the "VPC" service page: VPC -> Security groups -> click on your security group -> examine the inbound rules -> if needed, create a new rule via "Edit inbound rules" -> "Add rule". If you want to give access to just your IP, you can choose "My IP", which will prefill your current IP address.
Some resources I found helpful:
Connecting from internet into VPC
Troubleshooting Connectivity
You cannot connect to a serverless Aurora cluster from outside the VPC it is running in. You tried to access the DB from your local machine, right?
For local development you must create an EC2 instance in the same VPC as the Aurora cluster, connect to the EC2 instance over SSH, and connect from there to the database. For your local database management tools you can also set up SSH port forwarding.
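A minimal port-forwarding sketch, assuming a bastion EC2 instance in the cluster's VPC; the key path, EC2 address, and DB endpoint are placeholders:

# Forward local port 5432 through the EC2 instance to the Aurora endpoint
ssh -i ~/.ssh/bastion-key.pem -N -L 5432:med.cluster-cnonikf1pbgi.ap-southeast-1.rds.amazonaws.com:5432 ec2-user@<ec2-public-ip>

# While the tunnel is open, point local tools (or the Prisma datasource URL) at localhost
psql "host=127.0.0.1 port=5432 dbname=Medi user=postgres"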

How to Manage IBM Cloud Key-Protect Instance from CLI when Private Network Only Policy is Applied?

In doing some testing of the IBM Cloud Security and Compliance items, specifically the CIS Benchmarks for Best Practices, one item I was non-compliant on was in Key Protect, for the goal "Check whether Key Protect is accessible only by using private endpoints".
My Key Protect instance was indeed set to "Public and Private", so I changed it to Private. This change now requires me to manage my Key Protect instance from the CLI.
When I try to even look at my Key-Protect instance policy from the CLI I receive the following error:
ibmcloud kp instance -i my_instance_id policies
Retrieving policy details for instance: my_instance_id...
Error while getting instance policy: kp.Error: correlation_id='cc54f61d-4424-4c72-91aa-d2f6bc20be68', msg='Unauthorized: The user does not have access to the specified resource'
FAILED
Unauthorized: The user does not have access to the specified resource
Correlation-ID:cc54f61d-4424-4c72-91aa-d2f6bc20be68
I'm confused - I am running the CLI logged in as the tenant admin, with an access policy of "All resources in account (including future IAM enabled services)".
What am I doing wrong here?
Private endpoints are only accessible from within IBM Cloud. If you connect from the public internet, access should be blocked.
There are multiple ways to work with such a policy in place. One is to deploy (a VPC with) a virtual machine on a private network, then connect to it with a VPN or Direct Link. That way your resources are not accessible from the public internet, only through private connectivity. You can continue to use the IBM Cloud CLI, but configure it to use private endpoints.
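A rough sketch of that last step, assuming you are on a machine inside IBM Cloud (reached via VPN or Direct Link). The private API endpoint shown is an assumption based on IBM's CLI documentation and should be verified against the current docs:

# Log in against the private API endpoint instead of the public one (endpoint value assumed)
ibmcloud login -a private.cloud.ibm.com --apikey @/path/to/apikey.json

# Retry the Key Protect call from inside the private network
ibmcloud kp instance -i my_instance_id policies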

Which service account to use to connect from GKE to Cloud SQL?

I'm following the instructions on how to connect from GKE to Cloud SQL: https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine
It talks about YOUR-GSA-NAME. Google Cloud creates the "Compute Engine default service account" by default. Should I pick this one or create another service account just for GKE? What is the recommended way?
The Compute Engine default service account won't be able to connect to Cloud SQL out of the box; you'll have to add permissions to it (the Cloud SQL Client role) for it to be able to connect.
I would create a new one, however, as you likely don't want all GCE instances to be able to connect to Cloud SQL; for permissions, best practice is to limit access. So just create a new SA (service account) with the Cloud SQL Client role (and any other permissions GKE might need) and use that one.
This is all found in IAM -> Service Accounts in the console.
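The same can be done with gcloud. A minimal sketch, where the project ID and service account name are placeholders and roles/cloudsql.client is the Cloud SQL Client role:

# Create a dedicated service account for the GKE workload
gcloud iam service-accounts create gke-cloudsql-client \
  --display-name="GKE Cloud SQL client"

# Grant it only the Cloud SQL Client role on the project
gcloud projects add-iam-policy-binding MY_PROJECT \
  --member="serviceAccount:gke-cloudsql-client@MY_PROJECT.iam.gserviceaccount.com" \
  --role="roles/cloudsql.client"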

Local Postgres database to Google Cloud PostgreSQL Github

I would like to build a Google Cloud PostgreSQL database using the instructions here
I was able to successfully create the Postgres databases with appropriate tables and views locally.
What do I need to do in order to get the data on Google Cloud PostgreSQL? My goal is to have remote access to this data.
You have two options. The first is to use the Cloud SQL Proxy, as described here. As the shared link says, the Cloud SQL Proxy provides secure access to your instances without the need for authorized networks or for configuring SSL.
The second option is to configure access to your instance under Authorized networks, with or without SSL. The complete steps are listed here.
You can connect to Cloud SQL from a local test environment using the Cloud SQL Proxy. See quickstart-proxy-test.
The workflow is:
Your application (running locally) => Cloud SQL Proxy (running locally) => remote Cloud SQL service on GCP
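As a sketch of loading the locally built data into Cloud SQL through the proxy (first-generation cloud_sql_proxy invocation; the connection name, database, and user names are placeholders, and the proxy listens on 5433 so it does not clash with the local Postgres on 5432):

# 1. Dump the database you built locally
pg_dump -h localhost -p 5432 -U local_user -d mydb -f mydb.sql

# 2. Start the Cloud SQL Proxy against the remote instance
./cloud_sql_proxy -instances=my-project:us-central1:my-postgres=tcp:5433

# 3. Restore the dump into Cloud SQL through the proxy
psql -h 127.0.0.1 -p 5433 -U postgres -d mydb -f mydb.sql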

Lambda + RDS Postgres not working

I'm trying to make RDS with Postgres work with Lambda, but no luck so far. I've read all the other threads about it here and double-checked my Lambda VPC + subnet config; it's the same as the RDS one, but still no luck connecting. What am I missing here?
Some screenshots to clarify:
Previously, I enabled Public access and I could connect through serverless-offline.
Thanks!
EDIT ----
Have you verified the security group for your RDS instance? It needs to allow access from the security group assigned to your Lambda function. It is not enough that they are in the same VPC/subnets; the security group still needs to allow traffic on the Postgres port (5432).
Note that for security groups you don't have to specify an origin IP (which can be tricky for Lambda). I notice you are giving your Lambda function the group sg-29aac25d; you can use that ID as the source when granting access to the RDS security group.
IAM policies should be irrelevant, as you are authenticating against Postgres. Unless your IAM policy doesn't allow your Lambda to execute at all, the problem is not IAM.
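A minimal sketch with the AWS CLI, using the Lambda security group ID from the question as the source; the RDS security group ID is a placeholder:

# Allow the Lambda function's security group to reach the RDS security group on the Postgres port
aws ec2 authorize-security-group-ingress \
  --group-id <rds-security-group-id> \
  --protocol tcp \
  --port 5432 \
  --source-group sg-29aac25d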