How to connect to AWS Aurora (PostgreSQL) using Prisma - postgresql

I am working with Nest.js to build an API. I created a serverless RDS Aurora (PostgreSQL) cluster to use as a database.
This is my Aurora (PostgreSQL) database instance (Connectivity and Security).
This is my database configuration.
This is my security group detail.
Then I try to connect using the endpoint, database, user, etc., with Prisma in Nest.js:
generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "postgresql"
  url      = "postgresql://postgres:password@med.cluster-cnonikf1pbgi.ap-southeast-1.rds.amazonaws.com:5432/Medi?schema=public&ssl=true"
}
But when I run this command:
npx prisma migrate dev --name init
I got an error like this:
Error: P1001: Can't reach database server at `med.cluster-cnonikf1pbgi.ap-southeast-1.rds.amazonaws.com`:`5432`
Please make sure your database server is running at `med.cluster-cnonikf1pbgi.ap-southeast-1.rds.amazonaws.com`:`5432`.

I was able to connect directly to my Aurora cluster without the need for a special gateway or EC2 instance. This worked for me:
Make sure you have "Public access" set to "Publicly accessible".
You should see this option when creating the DB, but you can also modify it once the DB has already been created: go to RDS -> Databases -> select a DB instance, not the cluster (the cluster does not seem to provide this option) -> click the "Modify" button in the top right -> scroll down to the "Connectivity" section -> expand it and you'll see the option to change this setting.
Ensure the VPC security group that you have assigned to your DB grants external access to your DB. The same "Connectivity" section from step 1 also shows the VPC security group that your DB is using; take note of its name. You can view the details of your security group from the "VPC" service config page: VPC -> Security groups -> click on your security group -> examine the inbound rules -> if needed, create a new rule by clicking "Edit inbound rules" -> "Add rule". If you want to give access to just your IP you can choose "My IP", which will prefill your current IP address. A CLI sketch of the same rule follows below.
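For reference, a minimal sketch of adding such an inbound rule with the AWS CLI; the security group ID and CIDR below are made-up placeholders, so substitute your own values:

    # Allow PostgreSQL (port 5432) from a single workstation IP.
    # sg-0123456789abcdef0 and 203.0.113.25/32 are hypothetical values.
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp \
      --port 5432 \
      --cidr 203.0.113.25/32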
Some resources I found helpful:
Connecting from internet into VPC
Trouble Shooting Connectivity

You cannot connect to a serverless Aurora cluster from outside the VPC it is running in. You tried to access the DB from your local machine, right?
For local development you must create an EC2 instance in the same VPC as the Aurora cluster, connect to that EC2 instance with SSH, and then connect from there to the database. For your local database management tools you can also set up SSH port forwarding (see the sketch below).
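A minimal port-forwarding sketch, assuming an EC2 bastion in the same VPC; the key path, bastion address, and credentials are placeholders:

    # Forward local port 5432 through the bastion to the Aurora endpoint.
    ssh -i ~/.ssh/my-key.pem -N \
      -L 5432:med.cluster-cnonikf1pbgi.ap-southeast-1.rds.amazonaws.com:5432 \
      ec2-user@<bastion-public-ip>
    # Then point Prisma at localhost, e.g.:
    # DATABASE_URL="postgresql://postgres:password@localhost:5432/Medi?schema=public"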

Related

How to Connect to Cloud SQL Through Kubernetes

This is driving me crazy, been trying to get this to work for 3 days now: I'm trying to connect a kubernetes deployment to my Cloud SQL database in GCP.
Here's what I've done so far:
Set up the cloud SQL proxy to work as a sidecar in my deployment
Created a GKE service account and attached it to my deployment
Bound the GKE service account to my GCP service account
Edited the service account so that (as far as I can tell) it has Owner permission
Yet when I run the deployment in GKE I still get:
the default Compute Engine service account is not configured with sufficient permissions to access the Cloud SQL API from this VM. Please create a new VM with Cloud SQL access (scope) enabled under "Identity and API access". Alternatively, create a new "service account key" and specify it using the -credential_file parameter
How can I fix this? I can't find any documentation on how to set up the service account to have the correct permissions with Cloud SQL or how to debug this issue. Every single tutorial I can find ends with "bind your service account" and then stops. Nothing that describes what permissions are needed, and nothing about how to actually connect to the DB from my code (how would my code talk to the proxy?).
Please help
FINALLY got it to work!
Two major pieces that the main article on this (cloud.google.com/sql/docs/mysql/connect-kubernetes-engine) glosses over:
Properly setting up workload identity, for which I found these links to be very helpful:
a) https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
b) https://www.youtube.com/watch?v=l-nws1e4B8M
To connect to the DB, your code has to use 127.0.0.1 as the DB host (that's where the proxy sidecar listens).
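For what it's worth, here is a rough sketch of the Workload Identity binding plus the Cloud SQL role grant with gcloud; the project, service account, and namespace names are made up and need to be replaced with yours:

    # Let the Kubernetes service account impersonate the GCP service account.
    gcloud iam service-accounts add-iam-policy-binding \
      my-gsa@my-project.iam.gserviceaccount.com \
      --role roles/iam.workloadIdentityUser \
      --member "serviceAccount:my-project.svc.id.goog[my-namespace/my-ksa]"
    # The GCP service account also needs permission to use the Cloud SQL API.
    gcloud projects add-iam-policy-binding my-project \
      --member "serviceAccount:my-gsa@my-project.iam.gserviceaccount.com" \
      --role roles/cloudsql.client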

configuring Airflow ssh connection in google cloud composer

I'm trying to configure an SSH connection from the Airflow UI on a Google Cloud Composer environment to an on-premises PostgreSQL server.
Where should I store my private key?
How do I pass the private key location to the SSH connection config?
First, you will need to add an SSH connection under:
Airflow -> Admin -> Connections -> Connection Type (SSH)
That will allow you to use this connection in an operator to access the remote instance. Add your key to the Extra field (check key_file & host_key).
Documentation here: https://airflow.apache.org/docs/apache-airflow-providers-ssh/stable/connections/ssh.html
Adding the file to the same GCS bucket that holds the DAGs will make it reachable by the Airflow workers. You can create a new directory under dags and name it keys if you want, for example:
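A rough sketch, with a hypothetical bucket name and key file:

    # Copy the private key into the Composer bucket next to the DAGs.
    gsutil cp ~/.ssh/onprem_key gs://<your-composer-bucket>/dags/keys/onprem_key
    # Composer workers see the bucket mounted under /home/airflow/gcs/, so the
    # key_file in the connection's Extra field would then point to
    # /home/airflow/gcs/dags/keys/onprem_key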
Then you will need to design your pipeline (DAG) so it can use your private key to reach the remote instance.
You can use the SSHExecuteOperator or any other operator based on your design.
Check this question for more helpful details:
Airflow: How to SSH and run BashOperator from a different server

Local Postgres database to Google Cloud PostgreSQL Github

I would like to build a Google Cloud PostgreSQL database using the instructions here
I was able to successfully create the Postgres databases with appropriate tables and views locally.
What do I need to do in order to get the data on Google Cloud PostgreSQL? My goal is to have remote access to this data.
You have two options. The first is to use the Cloud SQL Proxy, as described here. As the shared links say, the Cloud SQL Proxy provides secure access to your instances without the need for authorized networks or for configuring SSL.
The second option is to configure access to your instance under Authorized networks, with or without SSL. The complete steps are listed here; a gcloud sketch follows below.
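A rough sketch of the second option with gcloud; the instance name and CIDR range are placeholders:

    # Whitelist your office/home IP range on the Cloud SQL instance.
    gcloud sql instances patch my-postgres-instance \
      --authorized-networks=203.0.113.0/24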
You can connect to Cloud SQL from a local test environment using the Cloud SQL Proxy. See quickstart-proxy-test.
The workflow is:
Your Application(Running Locally) => cloud sql proxy (Running locally) => GCP remote Cloud SQL service
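A minimal local sketch with the (v1) proxy binary, assuming a made-up instance connection name:

    # Listen on 127.0.0.1:5432 and tunnel to the Cloud SQL instance.
    ./cloud_sql_proxy -instances=my-project:us-central1:my-instance=tcp:5432
    # Your application then connects to 127.0.0.1:5432 as if the DB were local.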

Cross-region access to database on AWS

I have created a database (Postgres) on AWS in region1. Now I want a developer from region2 to help me with the database located in region1.
I know that by default AWS does not permit cross-region access. But is it possible to give the developer in region2 access to the database? If yes, how?
Yes you can. Do as below:
- In region1: follow the link to make your RDS publicly accessible.
- In region2: add a NAT gateway to your VPC so it has internet access.
So I illustrate the flow as:
[region2: your_app -> NAT] ---(internet)--> [region1: Internet_gateway -> bastion_instance (optional) -> RDS]
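As a rough sketch of the region2 side with the AWS CLI; the subnet and Elastic IP allocation IDs are hypothetical:

    # Create a NAT gateway in region2 so private subnets can reach the internet.
    aws ec2 create-nat-gateway \
      --subnet-id subnet-0123456789abcdef0 \
      --allocation-id eipalloc-0123456789abcdef0
    # Then add a 0.0.0.0/0 route to this NAT gateway in the private subnets' route table.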
With a recent update, you can also use AWS Direct Connect Gateway to connect two VPCs across regions within the same AWS account. Ref

How to access MySQL installed on my Google Cloud instance via MySQL Workbench

I have MySQL installed on a Google Cloud instance and it is running fine.
Earlier I had a separate Google Cloud SQL instance, but due to performance issues I installed MySQL on my Google Cloud compute instance. I am currently running the database from that instance.
The issue is that when it was a separate SQL instance I could access the database from MySQL Workbench.
But now that I have it installed on my Google Cloud instance, I cannot access it from Workbench.
Is there a way I can access it from my Workbench?
Please advise and help.
I assume that you have created a user in the MySQL instance for your current public IP. Once you have done that, go to MySQL Workbench and click the little plus icon; a connection window opens. You can give the connection any name. For the hostname you must provide the host address of your MySQL instance. Then give a username, and to enter the password click "Store in Vault" and type it in. Once you are done, click "Test Connection". If it gives a success message, your connection works; if not, recheck your inputs, especially your public IP, because it can change even after an hour or two. There is no need to fill the Default Schema field. A sketch of the server-side prerequisites follows below. This might be helpful for your work.
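As a rough sketch of those server-side prerequisites; the user, database, password, and IP values below are placeholders and the exact steps depend on your setup:

    # On the VM: allow a user to connect from your workstation's public IP.
    mysql -u root -p -e "CREATE USER 'devuser'@'203.0.113.25' IDENTIFIED BY 'strong-password'; GRANT ALL PRIVILEGES ON mydb.* TO 'devuser'@'203.0.113.25';"
    # Also make sure mysqld is not bound to 127.0.0.1 only (bind-address in my.cnf),
    # and open port 3306 in the VPC firewall:
    gcloud compute firewall-rules create allow-mysql-from-my-ip \
      --allow=tcp:3306 --source-ranges=203.0.113.25/32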