I have deployed a VNet on Azure and set up a Point-to-Site connection following this tutorial. I need three things on this network:
1. A VM instance running MongoDB in Docker.
2. A Web App API (Express.js) that should reach (1) via its local address.
3. A connection from my local machine to the VNet so I can manage the VM instance.
I managed to deploy (1).
I successfully connected my machine (3) to the VPN and can access (1) on the local IP 10.1.0.5:PORT using a MongoDB management tool.
For the Web App API (2), I followed all the necessary steps mentioned here, and the Azure portal shows that the app is connected properly.
According to this video, I should be able to connect to the VM (1). However, I cannot access the local resources from the Web App API (2).
My connection string for the Web App API (2) is of the following format:
mongodb://[username]:[password]@10.1.0.5:[port]/[db-name]
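For what it's worth, here is a minimal sketch of how the Express.js app might open that connection using the official Node.js mongodb driver; the credentials, port, and database name are placeholders, and the short server-selection timeout is only there to make reachability failures surface quickly:

    import { MongoClient } from "mongodb";

    // Placeholder credentials and host; note the "@" between the password
    // and the host, and percent-encode special characters in the password.
    const uri = "mongodb://myUser:myPassword@10.1.0.5:27017/mydb";

    async function checkConnectivity(): Promise<void> {
      const client = new MongoClient(uri, { serverSelectionTimeoutMS: 5000 });
      try {
        await client.connect();
        // A ping proves the Web App can actually reach the VM over the VNet.
        await client.db("mydb").command({ ping: 1 });
        console.log("Connected to MongoDB over the VNet");
      } finally {
        await client.close();
      }
    }

    checkConnectivity().catch(console.error);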
What could be the possible reason?
Since this seems to be specific to your setup, I would recommend reaching out to support so the support team can do a thorough investigation.
-- Anavi N [MSFT]
Related
I am trying to create a linked service in ADF to connect to a MongoDB instance, and I am getting 30-second server timeouts.
I have the connection string and I can connect using Compass - my computer's IP address is whitelisted - but I cannot connect through the Azure linked service using their MongoDB connector with this connection string.
The Azure IP address ranges for my region have been added to the whitelist as well, using the latest set published by Microsoft. I am using an AzureHostedIntegrationRuntime that is in the same region the MongoDB is hosted in.
The problem is that the MongoDB is hosted by a software house, and I am not convinced they know what they are doing. SSL is NOT enabled on the MongoDB, they are using the community edition v1.34.1, and the database is small (< 0.75 GB). The MongoDB instance is installed on a Linux box - I was looking at a SelfHostedIntegrationRuntime, but that requires installing a gateway on the server, which in turn needs a Windows server.
If anybody has any experience of connecting to a MongoDB through Azure Data Factory, your help would be appreciated. The only option from the Azure end is the connection string, and I know that is correct as I can connect using Compass with it, but it times out when trying to connect using the Azure linked service, so it looks like Azure cannot see the MongoDB.
It connects OK with the given connection string using Compass, just not using Azure, even though the Azure IP addresses have been whitelisted.
Solved by the software house, so they do actually know what they are doing.
There is no need to use a SelfHostedIntegrationRuntime; the AzureHostedIntegrationRuntime works just fine. There is also no need to whitelist the Azure IPs - these are subject to revision anyway.
", but on the instance firewall, I have the option to allow the exact service and this should cover any future ip changes. For now, I have allowed access only for the "
Hope this makes sense.
When I try to read an Oracle table via Azure Databricks (I connect to a VPN to access this DB), it shows the error below:
java.sql.SQLRecoverableException: IO Error: The network adapter could not establish the connection
Do I need to specify the VPN details in Databricks?
Even if you are connected to the VPN, the Databricks cluster running in the cloud cannot reach your on-premises Oracle installation. To achieve that, you need to work with your networking team to set up a VPN connection between on-premises and the cloud. It's described in the documentation in great detail, so it doesn't make sense to repeat it here.
Hostnames used in the Databricks clusters should be enabled to access on-prem resources through Endpoint Services. You can refer to: link
I have created my organisation's infrastructure in GCP following the Cloud Foundation Toolkit, using the Terraform modules provided by Google.
The following table lists the IP ranges for all environments:
Now I am in the process of deploying my application, which consists basically of Cloud Run services and a Cloud SQL (Postgres) instance.
The Cloud SQL instance was created with a private IP from the "unallocated" IP range that is reserved for peered services (such as Cloud SQL).
In order to establish connectivity between Cloud Run and Cloud SQL, I have also created the Serverless VPC Connector (IP range 10.1.0.16/28) and configured the Cloud SQL proxy.
When I try to connect to the database from the Cloud Run service I get this error after ~10s:
CloudSQL connection failed. Please see https://cloud.google.com/sql/docs/mysql/connect-run for additional details: Post "https://www.googleapis.com/sql/v1beta4/projects/[my-project]/instances/platform-db/createEphemeral?alt=json&prettyPrint=false": context deadline exceeded
I have granted roles/vpcaccess.user for both the default Cloud Run SA and the one used by the application in the host project.
I have granted roles/compute.networkUser for both SAs in the service project. I also granted roles/cloudsql.client for both SAs.
I have enabled servicenetworking.googleapis.com and vpcaccess.googleapis.com in the service project.
I have run out of ideas and I can't figure out what the issue is.
It looks like a timeout when Cloud Run tries to make the POST request to the Cloud SQL API, so it seems the VPC connector (10.1.0.16/28) cannot reach the Cloud SQL instance (10.0.80.0/20).
Has anyone experienced this issue before?
When you use the built-in Cloud SQL connection in Cloud Run (but also App Engine and Cloud Functions), a connection similar to the Cloud SQL proxy is created. This connection can only be made to a Cloud SQL public IP, even if you have a serverless VPC connector and your database is reachable through the VPC.
If you have only a private IP on Cloud SQL, you need to use that private IP to reach the database, not the built-in Cloud SQL connector. More detail in the documentation.
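As an illustration, here is a minimal sketch of a Cloud Run (Node.js) service connecting straight to the private IP through the VPC connector, assuming the "pg" driver; the IP, credentials, and database name are hypothetical:

    import { Client } from "pg";

    // Connect directly to the instance's private IP over the Serverless
    // VPC Connector; no /cloudsql unix socket and no Cloud SQL API call.
    async function queryOverPrivateIp(): Promise<void> {
      const client = new Client({
        host: "10.0.80.3",              // hypothetical Cloud SQL private IP
        port: 5432,
        user: "app_user",               // hypothetical credentials
        password: process.env.DB_PASSWORD,
        database: "platform_db",        // hypothetical database name
        connectionTimeoutMillis: 5000,  // fail fast if routing is broken
      });
      await client.connect();
      const res = await client.query("SELECT 1 AS ok");
      console.log(res.rows);            // [ { ok: 1 } ]
      await client.end();
    }

    queryOverPrivateIp().catch(console.error);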
I also wrote an article on this.
If you are using a private IP, you need to check the Docker bridge network's IP range. Here is what the documentation says:
If a client cannot connect to the Cloud SQL instance using private IP, check to see if the client is using any IP in the range 172.17.0.0/16. Connections fail from any IP within the 172.17.0.0/16 range to Cloud SQL instances using private IP. Similarly, Cloud SQL instances created with an IP in that range are unreachable. This range is reserved for the docker bridge network.
To resolve some of the issues you are experiencing, follow the documentation here and post any error messages you receive. For example, you could try:
Try the gcloud sql connect command to connect to your instance. This command authorizes your IP address for a short time. You can run this command in an environment with Cloud SDK and mysql client installed. You can also run this command in Cloud Shell, which is available in the Google Cloud Console and has Cloud SDK and the mysql client pre-installed.
Temporarily allow all IP addresses to connect to an instance. For IPv4, authorize 0.0.0.0/0 (for IPv6, authorize ::/0). After you have tested this, please make sure you remove it again, as it opens your instance up to the world!
Are you using connection pools?
If not, I would create a cache of connections so that when your application needs to talk to the database, it can take a temporary connection from the pool. Once the application has finished its operation, the connection returns to the pool for later use. For this to work correctly, connections need to be opened and closed efficiently and not waste any resources.
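Here is a minimal pooling sketch with the Node.js "pg" driver; the host, credentials, and table are hypothetical, and the pool sizes are only illustrative:

    import { Pool } from "pg";

    // The pool keeps a small cache of open connections that requests
    // borrow and return, instead of opening a fresh one per query.
    const pool = new Pool({
      host: "10.0.80.3",           // hypothetical private IP
      user: "app_user",            // hypothetical credentials
      password: process.env.DB_PASSWORD,
      database: "platform_db",     // hypothetical database name
      max: 5,                      // cap concurrent connections per instance
      idleTimeoutMillis: 30000,    // close connections idle for 30 s
    });

    export async function getUserCount(): Promise<number> {
      // pool.query acquires a client, runs the query, and releases
      // the client back to the pool automatically.
      const res = await pool.query("SELECT count(*) AS n FROM users");
      return Number(res.rows[0].n);
    }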
I created a Compute Engine instance on which I am hosting my MongoDB server.
I also have a Node.js server, currently hosted in App Engine, in the same project and the same region.
Now I want to connect my MongoDB database to the App Engine server.
How can I do this?
Please guide me.
Thanks in advance.
So the main question is how you're attempting to connect from GAE to MongoDB, which is not included in your question...
This aside, you'll need the connection string, as per the MongoDB documentation [1], and this doc shows how to get it [2].
Since you're running both GAE and the GCE instance running MongoDB in the same project, you can use the internal IP address, and you can remove the external IP address from the GCE instance to remove a potential security issue with people accessing MongoDB directly.
The connection string would be:
mongodb://[username:password@]GCE_INTERNAL_IP[:port1][/[defaultauthdb][?options]]
Replace GCE_INTERNAL_IP with the actual internal IP of the GCE instance running MongoDB. You can find this in the GCP console.
https://docs.mongodb.com/guides/server/drivers/#obtain-your-mongodb-connection-string
https://docs.mongodb.com/manual/reference/connection-string/#mongodb-uri
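To make the shape of this concrete, here is a minimal sketch for the Node.js server on App Engine using the official mongodb driver; the internal IP, credentials, and database name are placeholders for your own values:

    import { MongoClient } from "mongodb";

    // Internal IP of the GCE instance running MongoDB (placeholder).
    const GCE_INTERNAL_IP = "10.128.0.2";
    const uri = `mongodb://appUser:s3cret@${GCE_INTERNAL_IP}:27017/mydb`;

    const client = new MongoClient(uri);

    // List collection names as a quick end-to-end connectivity check.
    export async function listCollections(): Promise<string[]> {
      await client.connect();
      const cols = await client.db("mydb").collections();
      return cols.map((c) => c.collectionName);
    }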
I installed the Oracle JDBC thin driver to connect to the on-prem Oracle DB, but when I test the connection I get a network adapter error.
I tried changing the host, but it is still the same.
When running the GCS-to-BQ pipeline, I am getting a network port error. Can we change the VPC the pipeline is running on?
Regarding the Oracle DB connection error: is the DB available on the public network for connections? Currently the Wrangler service in Cloud Data Fusion cannot talk to the on-prem DB over a private connection, and we are actively working towards it.
However, if the DB is available on the public network, then it seems like an issue with the Oracle DB configuration. Can you please take a look at this answer and see if it helps: Oracle SQL Developer: Failure - Test failed: The Network Adapter could not establish the connection?
Also, are you able to connect to the Oracle DB through some other query tool, such as SqlWorkbench?
Breaking down your question:
1. Connecting to on-prem databases
It is possible nowadays to connect to on-premise databases. Make sure you have created an interconnect between the on-prem network and the network used by the Data Fusion instance, and make sure you have applied the right firewall rules (the logs suggest you are hitting firewall issues). I suggest connecting directly to the database first to confirm that the network setup works.
2. Change network configurations on the Data Fusion job.
You can specify parameters for your job. There are options to change the network and subnetwork that the job will be executed in, under Configure > Compute config > Customize. If you use a Shared VPC, you can also specify the host project.