Azure PostgreSQL Flexible Server VNet - postgresql

I have an Azure Flexible Server PostgreSQL instance with VNet integration.
It works fine with my web app integration.
However, I would like to connect to this PG instance from an external, on-premises client.
Is there a way to do that with Flexible Server VNet integration?
Thanks

I managed to get it working with a VPN gateway.
I was able to connect my remote server to the virtual network this way:
https://learn.microsoft.com/en-us/azure/vpn-gateway/tutorial-site-to-site-portal
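Once the site-to-site tunnel is up, the on-prem client connects to the server's private name like any other PostgreSQL host. A minimal sketch of the connection string; the server name, user, and database here are hypothetical placeholders, and the actual connection (commented out) would need the psycopg2 driver and the VPN tunnel in place:

```python
def build_pg_dsn(host, user, dbname, port=5432, sslmode="require"):
    """Assemble a libpq key/value DSN; Flexible Server enforces TLS by default."""
    return (f"host={host} port={port} dbname={dbname} "
            f"user={user} sslmode={sslmode}")

# Hypothetical private DNS zone name of the VNet-integrated instance
dsn = build_pg_dsn(
    host="myserver.private.postgres.database.azure.com",
    user="pgadmin",
    dbname="appdb",
)
# With the tunnel up, an on-prem client would then connect with e.g.:
#   import psycopg2
#   conn = psycopg2.connect(dsn, password="...")
print(dsn)
```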

Related

How to read an Oracle DB over VPN from Azure Databricks?

When I try to read an Oracle table via Azure Databricks (I connect to a VPN to access this DB), it shows the error below:
java.sql.SQLRecoverableException: IO Error: The network adapter could not establish the connection.
Do I need to specify the VPN details in Databricks?
Even if you are connected to the VPN, the Databricks cluster running in the cloud cannot reach your on-premises Oracle installation. To achieve that, you need to work with your networking team to set up a VPN connection between on-premises and the cloud. It's described in the documentation in great detail, so it doesn't make sense to repeat it here.
Hostnames used by the Databricks clusters should be enabled to access on-prem resources through Endpoint Services. You can refer to: link
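For reference, once the cluster can route to the on-prem host, the read itself is a standard JDBC source. A sketch of the thin-driver URL; the host, port, service name, table, and credentials are all hypothetical, and the Spark read (commented out) only works on a cluster with actual network reachability:

```python
def oracle_jdbc_url(host, port, service):
    """Oracle thin-driver URL addressed by service name."""
    return f"jdbc:oracle:thin:@//{host}:{port}/{service}"

url = oracle_jdbc_url("onprem-db.corp.example", 1521, "ORCLPDB1")
# On a Databricks cluster that can reach the host, the read would look like:
#   df = (spark.read.format("jdbc")
#         .option("url", url)
#         .option("dbtable", "HR.EMPLOYEES")
#         .option("user", "scott").option("password", "...")
#         .option("driver", "oracle.jdbc.OracleDriver")
#         .load())
print(url)
```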

Connectivity between Cloud Run and Cloud SQL (Internal IP)

I have created my organisation infrastructure in GCP following the Cloud Foundation Toolkit using the Terraform modules provided by Google.
The following table lists the IP ranges for all environments:
Now I am in the process of deploying my application that consists of basically Cloud Run services and a Cloud SQL (Postgres) instance.
The Cloud SQL instance was created with a private IP from the "unallocated" IP range that is reserved for peered services (such as Cloud SQL).
In order to establish connectivity between Cloud Run and Cloud SQL, I have also created the Serverless VPC Connector (IP range 10.1.0.16/28) and configured the Cloud SQL proxy.
When I try to connect to the database from the Cloud Run service I get this error after ~10s:
CloudSQL connection failed. Please see https://cloud.google.com/sql/docs/mysql/connect-run for additional details: Post "https://www.googleapis.com/sql/v1beta4/projects/[my-project]/instances/platform-db/createEphemeral?alt=json&prettyPrint=false": context deadline exceeded
I have granted roles/vpcaccess.user for both the default Cloud Run SA and the one used by the application in the host project.
I have granted roles/compute.networkUser for both SAs in the service project. I also granted roles/cloudsql.client for both SAs.
I have enabled servicenetworking.googleapis.com and vpcaccess.googleapis.com in the service project.
I have run out of ideas and I can't figure out what the issue is.
It looks like a timeout when Cloud Run makes the POST request to the Cloud SQL API, i.e. the VPC connector (10.1.0.16/28) cannot reach the Cloud SQL instance (10.0.80.0/20).
Has anyone experienced this issue before?
When you use the built-in Cloud SQL connection in Cloud Run (but also App Engine and Cloud Functions), a connection similar to the Cloud SQL proxy is created. This connection can only be made over a Cloud SQL public IP, even if you have a serverless VPC connector and your database is reachable through the VPC.
If your instance has only a private IP, you need to use that private IP to reach the database, not the built-in Cloud SQL connector. More detail in the documentation.
I also wrote an article on this.
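In other words, the service should address the instance by its private IP through the VPC connector, like any ordinary database host. A minimal sketch of such a connection URL; the private IP, driver (pg8000), user, and password are assumptions for illustration:

```python
def private_ip_db_url(user, password, host, dbname, port=5432):
    """SQLAlchemy-style URL pointing straight at the instance's private IP,
    bypassing the built-in /cloudsql connector entirely."""
    return f"postgresql+pg8000://{user}:{password}@{host}:{port}/{dbname}"

# Hypothetical private IP from the peered Cloud SQL range
url = private_ip_db_url("appuser", "secret", "10.0.80.3", "platform-db")
# In the Cloud Run service this would feed e.g. sqlalchemy.create_engine(url)
print(url)
```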
If you are using a private IP, you need to check the docker bridge network's IP range. Here is what the documentation says:
If a client cannot connect to the Cloud SQL instance using private IP, check to see if the client is using any IP in the range 172.17.0.0/16. Connections fail from any IP within the 172.17.0.0/16 range to Cloud SQL instances using private IP. Similarly, Cloud SQL instances created with an IP in that range are unreachable. This range is reserved for the docker bridge network.
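The reserved-range pitfall above is easy to rule out programmatically. A small check using Python's ipaddress module (the sample addresses are placeholders):

```python
import ipaddress

# Range reserved for the docker bridge network; connections from any address
# inside it to a private-IP Cloud SQL instance fail, per the documentation.
DOCKER_BRIDGE = ipaddress.ip_network("172.17.0.0/16")

def conflicts_with_docker_bridge(client_ip: str) -> bool:
    """True if the client address falls inside the reserved bridge range."""
    return ipaddress.ip_address(client_ip) in DOCKER_BRIDGE

print(conflicts_with_docker_bridge("172.17.0.5"))  # True: connection will fail
print(conflicts_with_docker_bridge("10.1.0.20"))   # False: range is fine
```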
To resolve some of the issues you are experiencing, follow the documentation here and post any error messages you receive. For example, you could try:
Try the gcloud sql connect command to connect to your instance. This command authorizes your IP address for a short time. You can run it in any environment with the Cloud SDK and the mysql client installed, or in Cloud Shell, which is available in the Google Cloud Console and has both pre-installed.
Temporarily allow all IP addresses to connect to the instance: for IPv4, authorize 0.0.0.0/0 (for IPv6, authorize ::/0). After you have tested this, make sure you remove the rule again, as it opens your instance up to the world!
Are you using connection pools?
If not, consider creating a cache of connections so that when your application needs to talk to the database, it can take a temporary connection from the pool. Once the application has finished its operation, the connection returns to the pool for later reuse. For this to work correctly, connections need to be opened and closed efficiently so no resources are wasted.
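The pooling idea described above can be sketched in a few lines; this is a generic illustration (a real service would use its driver's or SQLAlchemy's built-in pooling), with make_conn standing in for an actual database connect call:

```python
import queue

class ConnectionPool:
    """Hand out pre-opened connections and take them back for reuse,
    instead of opening a fresh connection per request."""

    def __init__(self, make_conn, size=5):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(make_conn())  # open all connections up front

    def acquire(self):
        return self._pool.get()          # blocks if the pool is exhausted

    def release(self, conn):
        self._pool.put(conn)             # return the connection for reuse

# Usage with a stand-in connection factory:
pool = ConnectionPool(make_conn=lambda: object(), size=2)
conn = pool.acquire()
# ... run queries ...
pool.release(conn)  # the same object goes back into the pool
```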

Google Cloud Data Fusion: 1. does not connect to Oracle; 2. 'default' network port error while the pipeline is running

I installed the Oracle JDBC thin driver to connect to an on-prem Oracle DB, but when I test the connection I get a network adapter error.
I tried changing the host, but the result is the same.
When running the pipeline from GCS to BigQuery, I get a network port error. Can we change the VPC the pipeline runs on?
Regarding the Oracle DB connection error: is the DB reachable over the public network? Currently the wrangler service in Cloud Data Fusion cannot talk to an on-prem DB over a private connection; we are actively working on that.
However, if the DB is available on the public network, then it looks like an issue with the Oracle DB configuration. Please take a look at this answer and see if it helps: Oracle SQL Developer: Failure - Test failed: The Network Adapter could not establish the connection?
Also, are you able to connect to the Oracle DB through some other query tool, such as SQL Workbench?
Breaking down your question:
1. Connecting to on-prem databases
It is possible nowadays to connect to on-premises databases. Make sure you have created an interconnect between the on-prem network and the network used by the Data Fusion instance, and that you have applied the right firewall rules (from the logs, it seems you are hitting firewall issues). I suggest connecting directly to the database first to confirm that the network setup works.
2. Change network configurations on the Data Fusion job.
You can specify parameters for your job. There are options to change the network and subnetwork the job executes under, via Configure > Compute config > Customize. If you use a Shared VPC, you can also specify the host project.

Connection failure - AWS RDS and MySQL Workbench

I have set up numerous databases on AWS RDS; however, MySQL Workbench fails to connect to them. I have read online that my machine's IP address must be added to the database's security group, but that option is not available.
Can anyone tell me whether security groups are only available on a paid plan, as I'm currently using the AWS RDS free tier?
Thanks in advance
Yes, you can access AWS RDS from Workbench on the free tier as well.
You need to fix a few things before connecting via Workbench:
To access AWS RDS from a remote machine, set Public Accessibility to 'Yes' when you create the AWS RDS instance.
Also, add your public IP address to the AWS RDS security group as an inbound rule on port 3306.
For more details click here: https://www.serverkaka.com/2018/09/connect-aws-rds-mysql-instance-with-phpmyadmin.html
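The inbound rule described above can also be applied programmatically. A sketch of the rule parameters as EC2's authorize_security_group_ingress expects them; the security-group ID and public IP are placeholders, and actually applying the rule (commented out) would need boto3 and AWS credentials:

```python
def mysql_ingress_rule(my_public_ip: str) -> dict:
    """One IpPermissions entry: allow MySQL (TCP 3306) from a single host."""
    return {
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "IpRanges": [{
            "CidrIp": f"{my_public_ip}/32",  # /32 = just this one address
            "Description": "MySQL Workbench from my machine",
        }],
    }

rule = mysql_ingress_rule("203.0.113.10")
# import boto3
# ec2 = boto3.client("ec2")
# ec2.authorize_security_group_ingress(GroupId="sg-0123456789abcdef0",
#                                      IpPermissions=[rule])
print(rule["IpRanges"][0]["CidrIp"])
```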

Azure VNet - Accessing VNet Resources from a WebApp

I have deployed a VNet on Azure. I have also set up a Point-to-Site connection following this tutorial. I need 3 things on this network:
VM instance for a MongoDB Docker container.
WebApp API (ExpressJS) which should treat (1) as a local address.
Connect my local machine to the VNet to manage my VM instance.
I managed to deploy (1).
I successfully connected my machine (3) to the VPN and can access (1) at local IP 10.1.0.5:PORT using a MongoDB management tool.
For the WebApp API (2), I have followed all the necessary steps mentioned here, and the Azure portal shows that the app is connected properly.
According to this video, I should be able to connect to the VM (1). However, I cannot access the local resources from the WebApp API (2).
My Connection String for WebApp API(2) is of the following format:
mongodb://[username]:[password]@10.1.0.5:[port]/[db-name]
What can be the possible reason?
Since this seems to be specific to your setup, I would recommend reaching out to support so the support team can do a thorough investigation.
-- Anavi N [MSFT]
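One thing worth ruling out before opening a support case: in a MongoDB URI the credentials are separated from the host by "@", and any special characters in the username or password must be percent-encoded or the driver will misparse the string. A minimal sketch with placeholder credentials:

```python
from urllib.parse import quote_plus

def mongo_uri(user, password, host, port, dbname):
    """Build a MongoDB URI with the credentials safely percent-encoded."""
    return (f"mongodb://{quote_plus(user)}:{quote_plus(password)}"
            f"@{host}:{port}/{dbname}")

# "@" and "/" in the password would otherwise break URI parsing
uri = mongo_uri("appuser", "p@ss/word", "10.1.0.5", 27017, "mydb")
print(uri)  # mongodb://appuser:p%40ss%2Fword@10.1.0.5:27017/mydb
```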