I have some timeout problems when connecting my Azure Web App to MongoDB hosted on an Azure VM.
2015-12-19T15:57:47.330+0100 I NETWORK Socket recv() errno:10060 A connection attempt
failed because the connected party did not properly respond after a period of time,
or established connection failed because connected host has failed to respond.
2015-12-19T15:57:47.343+0100 I NETWORK SocketException: remote: 104.45.x.x:27017 error:
9001 socket exception [RECV_ERROR] server [104.45.x.x:27017]
2015-12-19T15:57:47.350+0100 I NETWORK DBClientCursor::init call() failed
Currently MongoDB is configured on a single server (just for dev) and it is exposed through a public IP. The website connects to it using an Azure domain name (*.westeurope.cloudapp.azure.com), without a Virtual Network.
Usually everything works well, but after some minutes of inactivity I get that timeout exception. The same happens when using the MongoDB shell from my PC, so I'm quite sure the problem is on the MongoDB side.
Am I missing some configuration?
After some searching, here are my considerations:
It is usually good practice to implement some sort of retry logic for every resource that you access on Azure (database, VM, ...). For MongoDB there is only a partial implementation, so you may need to write your own (a rough sketch is included below, after these considerations). See also this issue and this one.
If possible, all resources on Azure should be in the same Azure Virtual Network (this way all connections are made using Azure private IPs instead of public IPs). This is also useful for security reasons, because you don't need to open endpoints to the public.
When deploying MongoDB on Azure, try to follow the official MongoDB guidelines.
In this particular case you should set net.ipv4.tcp_keepalive_time to a value lower than Azure's TCP keep-alive timeout, which is 240 seconds by default. This way the connection is closed on the MongoDB side and the MongoDB driver can detect this condition and open a new connection; if the connection is dropped by Azure instead, the driver cannot detect it. If you want to change this setting on Azure (not recommended), you can find it inside the Public IP configuration.
In my development environment I set net.ipv4.tcp_keepalive_time to 120 and now everything seems to work fine. Note that if you host MongoDB inside a Docker container, you should apply this setting on the Docker host.
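A minimal sketch of how this setting can be applied on a Linux VM (120 is the value mentioned above; anything below Azure's 240-second timeout should behave the same):
# check the current value
sysctl net.ipv4.tcp_keepalive_time
# apply immediately (lost on reboot)
sudo sysctl -w net.ipv4.tcp_keepalive_time=120
# persist across reboots
echo "net.ipv4.tcp_keepalive_time = 120" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p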
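As for the retry logic mentioned earlier, here is a rough sketch using the C# driver. The helper name, exception choice and back-off are illustrative only, not the partial implementation referenced above:
using System;
using System.Threading;
using MongoDB.Driver;

static class MongoRetry
{
    // Naive retry wrapper: retry an operation a few times when the connection is dropped.
    public static T WithRetry<T>(Func<T> operation, int maxAttempts = 3)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                return operation();
            }
            catch (MongoConnectionException) when (attempt < maxAttempts)
            {
                // The idle connection was probably dropped (e.g. by Azure's idle timeout):
                // back off briefly and let the driver open a fresh connection.
                Thread.Sleep(TimeSpan.FromSeconds(attempt));
            }
        }
    }
}
// Usage (collection is a placeholder IMongoCollection<BsonDocument>):
// var doc = MongoRetry.WithRetry(() => collection.Find(FilterDefinition<BsonDocument>.Empty).FirstOrDefault());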
Here are some other useful links:
http://focusmatic.tumblr.com/post/39569711018/solving-mongodb-connection-losses-on-windows-azure
https://docs.mongodb.org/ecosystem/platforms/windows-azure/
https://michaelmckeownblog.wordpress.com/2013/12/04/resolving-internal-ips-vs-dns-names-between-vms/
https://gist.github.com/davideicardi/f2094c4c3f3e00fbd490
MongoDB connection problems on Azure
MongoDB connection timeouts (Azure)
When using the C# Mongo driver we resolved this by setting the following:
MongoDefaults.MaxConnectionIdleTime = TimeSpan.FromMinutes(1);
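If you prefer not to change the global default, a similar cap can be set per client through MongoClientSettings; a small sketch, with a placeholder connection string:
using System;
using MongoDB.Driver;

// Host name is a placeholder; the idea is to stay below Azure's ~4 minute idle timeout.
var url = new MongoUrl("mongodb://myserver.westeurope.cloudapp.azure.com:27017");
var settings = MongoClientSettings.FromUrl(url);
settings.MaxConnectionIdleTime = TimeSpan.FromMinutes(1);
var client = new MongoClient(settings);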
I am trying to create a linked service in ADF to connect to a MongoDB and I am getting 30-second server timeouts.
I have the connection string and I can connect using Compass - my computer's IP address is whitelisted - but I cannot connect through the Azure linked service using their MongoDB connector with this connection string.
The Azure IP address ranges for my region have been added to the whitelist as well, using the latest set published by Microsoft. I am using an Azure-hosted integration runtime that is in the same region the MongoDB is hosted in.
The problem is that the MongoDB is hosted by a software house and I am not convinced they know what they are doing. SSL is NOT enabled on the MongoDB and they are using the community edition v1.34.1; the database is small, < 0.75 GB. The MongoDB instance is installed on a Linux box. I was looking at a self-hosted integration runtime, but that requires installing a gateway on the server, which in turn needs a Windows server.
If anybody has any experience of connecting to a MongoDB through Azure Data Factory, your help would be appreciated. The only option from the Azure end is the connection string, and I know that is correct as I can connect using Compass with it, but it times out when trying to connect using the Azure linked service, so it looks like it cannot see the MongoDB.
It connects OK with the given connection string using Compass, just not via Azure, even though the Azure IP addresses have been whitelisted.
Solved by the software house, so they do actually know what they are doing.
There is no need to use a SelfHostedIntegrationRuntime; the AzureHostedIntegrationRuntime works just fine. There is also no need to whitelist the Azure IPs (these are subject to revision anyway): on the instance firewall there is an option to allow the exact Azure service, and this should cover any future IP changes. For now, I have allowed access only for that service.
Hope this makes sense.
I am trying to establish a data connection using REST from a Qlik Cloud account to a locally running application. I get an error:
Connection to local resources is not allowed
The application running on my laptop has a REST API enabled.
I was not able to use Qlik Sense Desktop, so I had to log in to Qlik Cloud through the browser.
I also tried giving the IP address of my laptop instead of localhost. It still throws an error:
Connection to http://<ip_address>:1000/v1/documents?uri=/csv/myFile.csv is not allowed
Should I be running my application only on a server? Any help is appreciated.
The data connections are "executed" in the context of the Qlik Engine, which means that when specifying localhost the connection will try to load the data from the machine where the Engine is running. In Qlik Cloud's case this will be some machine in Qlik's cloud.
You can:
use QS Desktop (you've mentioned that this is not working for you)
host your service somewhere on the internet where the Engine can reach it
use a service (like ngrok) that can tunnel the local server to a public URL, which you can then access from Qlik (see the sketch below)
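A rough sketch of the ngrok option (port 1000 matches the URL in the error above; the forwarding URL is whatever ngrok prints):
# expose the local REST API on a public URL
ngrok http 1000
# ngrok prints a forwarding address such as https://<random-id>.ngrok.io -> http://localhost:1000
# use that https URL (plus the /v1/documents?... path) in the Qlik REST connection instead of localhost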
When trying to connect to AWS DocumentDB using the mongocxx C++ driver, even after passing the AWS combined PEM file as a URI parameter (CA file), I get the TLS handshake error below.
No suitable servers found (`serverSelectionTryOnce` set): [TLS handshake failed: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed calling ismaster on 'docdb-xxxxxxxx.ap-southeast-1.docdb.amazonaws.com:27017']: generic server error
I have masked out the full hostname of the DocumentDB instance. I am using the connection URI method mentioned in http://mongocxx.org/mongocxx-v3/configuration/
// 2) Using the URI
auto client2 = mongocxx::client{uri{"mongodb://host1/?tls=true&tlsAllowInvalidCertificates=true&tlsCAFile=/path/to/custom/cert.pem"}};
I am using mongocxx 3.4.2 and libmongoc 1.16.2
I have tried this connection with the Node.js driver and it is able to connect. Any ideas what could be wrong?
I was trying to connect to my DocumentDB cluster on AWS via an external app like TablePlus and I had the same error:
No suitable servers found
(`serverSelectionTryOnce` set): [Failed to resolve 'docdb-1984-08-10-12-14-15.cluster-boogeyman.xy-central-99.docdb.amazonaws.com']
What I tried next:
opened all sorts of incoming traffic in the security group assigned to my cluster
made sure that "Encryption-at-rest" (in Advanced Settings) was disabled while creating the cluster
I still got this error. What I discovered next is that:
Trying to connect to an Amazon DocumentDB cluster directly from a public endpoint, such as your laptop or local development machine, will fail. Amazon DocumentDB is virtual private cloud (VPC)-only and does not currently support public endpoints. Thus, you can't connect directly to your Amazon DocumentDB cluster from your laptop or local development environment outside of your VPC.
Please read the AWS connection troubleshooting section. To connect to an Amazon DocumentDB cluster from outside an Amazon VPC, you can use an SSH tunnel.
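A rough sketch of such an SSH tunnel, assuming there is an EC2 instance (bastion) in the same VPC as the cluster; the key path, cluster endpoint and bastion address are placeholders:
# forward local port 27017 to the DocumentDB cluster through the bastion host
ssh -i ~/.ssh/my-key.pem -N -L 27017:docdb-xxxxxxxx.cluster-xxxxxxxx.ap-southeast-1.docdb.amazonaws.com:27017 ec2-user@<bastion-public-ip>
# then point the client at localhost:27017; since the TLS certificate is issued for the
# cluster host name, you typically also need tlsAllowInvalidHostnames=true in the connection string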
Our DevOps team has whitelisted my home IP address so that I can connect to our Postgres database on Azure, and I am able to connect to the database because of this.
Today I set up a VM in order to run Docker. I am running a container for RStudio, which is an app that, among many other things, allows me to connect to our database using ODBC.
After configuring the odbcinst.ini and odbc.ini files, I believe those are configured correctly, because when I try to connect I get the following error:
Error: nanodbc/nanodbc.cpp:983: 00000: FATAL: SSL connection is required. Please specify SSL options and retry.
Thus I think that my ODBC setup is correct: this error suggests my connection settings are fine, it's just that Azure will not allow the connection without SSL.
Searching that error message took me to this SO post with the following accepted answer:
By default, Azure Database for PostgreSQL enforces SSL connections between your server and your client applications to protect against MITM (man in the middle) attacks. This is done to make the connection to your server as secure as possible.
Although not recommended, you have the option to disable requiring SSL for connecting to your server if your client application does not support SSL connectivity. Please check How to Configure SSL Connectivity for your Postgres server in Azure for more details. You can disable requiring SSL connections from either the portal or using CLI. Note that Azure does not recommend disabling requiring SSL connections when connecting to your server.
My question is: if I am already able to connect to our database outside of my VM, thanks to my home IP being whitelisted and just using a Postgres driver with the DBeaver SQL client, is there anything I can do to connect from within my VM?
I can get my VM's IP address, but I am not sure whether sending that to our developers to whitelist would work.
Is there a prescribed course of action here?
I added this parameter to my .odbc.ini file and was able to connect:
sslmode=require
According to the Azure Postgres documentation, this parameter may take on different forms depending on the context:
"for example "ssl=true" or "sslmode=require" or "sslmode=required" and other variations"
I installed the Oracle JDBC thin driver to connect to an on-prem Oracle DB, but when I test the connection I get a network adapter error.
I tried changing the host, but I still get the same error.
When running the pipeline from GCS to BQ I get a network port error. Can we change the VPC the pipeline is running on?
Regarding the Oracle DB connection error: is the DB available on the public network for connections? Currently the Wrangler service in Cloud Data Fusion cannot talk to the on-prem DB over a private connection, but we are actively working towards it.
However, if the DB is available on the public network, then it seems like an issue with the Oracle DB configuration. Can you please take a look at this answer and see if it helps: Oracle SQL Developer: Failure - Test failed: The Network Adapter could not establish the connection?
Also, are you able to connect to the Oracle DB through some other query tool such as SQL Workbench?
Breaking down your question:
1. Connecting to on-prem databases
It is possible nowadays to connect to on-premises databases. Make sure you have created an interconnect between the on-prem network and the network used by the Data Fusion instance, and that you have applied the right firewall rules (it seems you are hitting firewall issues, based on the logs). I suggest trying to connect directly to the database first to confirm that the network setup works.
2. Change network configurations on the Data Fusion job.
You can specify parameters for your job. There are options to change the network and subnetwork that the job will be executed in, under Configure > Compute config > Customize. If you use a Shared VPC you can also specify the host project.