How do I connect my server to Atlas? - mongodb

Recently I decided to move my database from inside my server machine to the MongoDB Atlas service.
Atlas provides an IP Whitelist feature, which I use to remotely connect to the database cluster.
Should I connect my server application to Atlas using this feature?
What happens if my server's IP changes? Is it secure?

For general information on how to connect to an Atlas deployment, please see Connect to a Cluster.
For connecting using a driver, please see Connect via Driver. There is an extensive list of examples using all of the officially supported drivers.
As mentioned in the Prerequisites section, you need to use SSL/TLS and the IP whitelist to connect to your Atlas instance. This whitelist would need to be updated should your application server's IP change.
The whitelist provides an additional security layer on top of your username/password, since it will reject any connection not originating from a known IP address. It is strongly recommended to use this whitelist, and arguably the effort required to maintain it is small compared to the security advantages it provides.
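To illustrate the driver route, here is a minimal Node.js (TypeScript) sketch using the official mongodb package. The cluster host, credentials, and database/collection names are placeholders, and the connecting machine's IP must already be on the whitelist:
// Minimal Atlas connection sketch (placeholder host and credentials).
import { MongoClient } from "mongodb";

// mongodb+srv URIs enable TLS by default, which Atlas requires.
const uri = "mongodb+srv://appUser:APP_PASSWORD@cluster0.example.mongodb.net/?retryWrites=true&w=majority";

async function main(): Promise<void> {
  const client = new MongoClient(uri);
  try {
    // connect() will fail if this machine's IP is not on the Atlas whitelist
    await client.connect();
    const docs = await client.db("mydb").collection("items").find().limit(5).toArray();
    console.log(docs);
  } finally {
    await client.close();
  }
}

main().catch(console.error);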

Related

Azure Data Factory connecting to MongoDB via linked service: connection timeout

I am trying to create a linked service in ADF to connect to a MongoDB instance, and I am getting 30-second server timeouts.
I have the connection string and I can connect using Compass - my computer's IP address is whitelisted - but I cannot connect through an Azure linked service using their MongoDB connector with this connection string.
The Azure IP address ranges for my region have been added to the whitelist as well, using the latest set published by Microsoft. I am using an AzureHostedIntegrationRuntime that is in the same region the MongoDB is hosted in.
The problem is that the MongoDB is hosted by a software house, and I am not convinced they know what they are doing. SSL is NOT enabled on the MongoDB, they are using the community edition v1.34.1, and the database is small, < 0.75 GB. The MongoDB instance is installed on a Linux box. I was looking at a SelfHostedIntegrationRuntime, but that requires installing a gateway on the server, which in turn needs a Windows server.
If anybody has any experience of connecting to a MongoDB through Azure Data Factory, your help would be appreciated. The only option from the Azure end is the connection string, and I know that is correct as I can connect using Compass with it, but it times out when trying to connect using the Azure linked service, so it looks like it cannot see the MongoDB.
It connects OK with the given connection string using Compass, just not using Azure, even though the Azure IP addresses have been whitelisted.
Solved by the software house, so they do actually know what they are doing.
There is no need to use a SelfHostedIntegrationRuntime; the AzureHostedIntegrationRuntime works just fine. There is also no need to whitelist the Azure IPs - these are subject to revision anyway.
", but on the instance firewall, I have the option to allow the exact service and this should cover any future ip changes. For now, I have allowed access only for the "
Hope this makes sense.

How to connect MongoDB Atlas to GCP (Google Cloud Platform)?

I am trying to connect my app, which is hosted on Google Cloud Platform (GCP) App Engine, to my Mongo Atlas DB.
Mongo wants me to whitelist the GCP app's IP.
But GCP doesn't give me a static IP to whitelist.
I want to make sure I apply security best practices, and as far as I understand, whitelisting my DB for all IPs is not secure. So how can I do it without opening it to all IPs?
You have two solutions:
You can whitelist the App Engine IP ranges, but this is not secure, as described in the documentation:
From this example, we see that both the 8.34.208.0/20 and 8.35.192.0/21 IP ranges can be used for App Engine traffic. Other queries for any additional netblocks may return additional IP ranges.
Note that using static IP address filtering is not considered a safe and effective means of protection. For example, an attacker could set up a malicious App Engine app which could share the same IP address range as your application. Instead, we suggest that you take a defense in depth approach using OAuth and Certs.
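For context, the ranges in the quoted passage come from DNS netblock lookups along these lines (Google has since superseded this mechanism, so treat it as illustrative only):
dig -t TXT _cloud-netblocks.googleusercontent.com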
You can perform VPC peering. This requires several things:
Have a paid subscription to Mongo Atlas
Create a [peering between Mongo Atlas and your project](https://docs.atlas.mongodb.com/security-vpc-peering/)
Create a serverless VPC connector and add it to your App Engine to allow it to reach private IP on the VPC (and peering attached to the VPC, like your Mongo Atlas DB)
You have the option of reserving a static IP while creating a VM.
On the"create instance" page, scroll to "networking" you are presented with options for your
I. Internal IP
II. External IP
If you are running an M10 cluster (or higher) on Atlas, VPC peering is the way to go. I'd recommend trying this tutorial. It explains which CIDR ranges (what you referred to as IPs) to whitelist.
One thing to note here: the tutorial uses GCP's Kubernetes Engine. With App Engine there is a little extra effort, as it is one of GCP's serverless solutions, which is the reason why you cannot rely on static IPs or anything like that. You will need to connect your app to the VPC network via a connector:
Create a connector in the same region as your GAE app following these instructions. You can find out the current region of your GAE app with gcloud app describe. Just give the connector the range 10.8.0.0 for now (/28 is added automatically). Remember the name you gave it.
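As a sketch, step 1 can also be done from the CLI; the connector name, region, and network below are placeholders:
gcloud compute networks vpc-access connectors create my-connector \
    --region=us-central1 \
    --network=default \
    --range=10.8.0.0/28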
Depending on your environment, your app has to point to that connector. In Node.js it's your app.yaml file, and it looks similar to this:
runtime: nodejs10
# route outbound traffic through the connector created in step 1
vpc_access_connector:
  name: projects/GCLOUD_PROJECT_ID/locations/REGION_WHERE_GAE_RUNS/connectors/NAME_YOU_ENTERED_IN_STEP_1
Go to your Atlas project, navigate to Network Access, and whitelist the CIDR range you set for the connector in step 1.
You may also need to whitelist the CIDR range from step 1 for the VPC network. You can do that in GCP by navigating to VPC Network -> Firewall.
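If you do need that firewall rule, a minimal gcloud sketch (the rule name and network are placeholders; the range is the one from step 1):
gcloud compute firewall-rules create allow-vpc-connector \
    --network=default \
    --direction=INGRESS \
    --allow=tcp \
    --source-ranges=10.8.0.0/28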

Allow load-balanced instances to connect to a PostgreSQL server on a single Compute Engine instance

I am looking for a GCP networking best practice whereby I can allow auto-scaled instances to connect to a PostgreSQL server installed on a separate instance.
So far I have tried whitelisting the load balancer IP in the firewall and in the PostgreSQL config file, but failed.
Any help or pointer is highly appreciated.
The load balancer doesn't process information by itself; it just receives requests on its frontend address(es) and hands them off to instance groups.
The instance group is what handles the HTTP requests and connects to the database instance, so it is the instances, not the load balancer, that open connections to PostgreSQL.
The load balancer is used to dynamically distribute requests (or even create additional instances) to handle the traffic arriving at the same frontend address.
--
So first you should make it work with a regular instance, configure it, and save it as an instance template. Then you can proceed with creating an instance group that can be managed by a load balancer.
EDIT - Extended the answer from my comment
"I don't think your problem is related to Google cloud platform now. If you have a known IP address for the PostgreSQL server (connect using an internal network IP address so it doesn't change), then make sure your auto-balanced instances are in the same internal network, use db's internal IP and connect to it."

Mongo lab statement regarding internal networks

I'm not sure how to phrase this question or even if it's relevant here.
I'm researching a solution to move our in-house MongoDB installation to a cloud-based DB-as-a-service solution at Mongo lab.
The company has stated here http://docs.mlab.com/security/#network that if I deploy the DB in my region (I use Google Cloud):
When you connect to your mLab database from within the same datacenter/region, you communicate over your cloud hosting provider’s internal network.
How is that statement possible?
When I create a DB at Mongo lab, I get an external URL to connect to,
ds021984.mlab.com -> 104.154.103.88, instead of an internal host name like 10.x.x.x.
So how can that address be external without deeply affecting my latency?
Am I missing something? How is that statement possible?
The only time you can use the internal IP to address a VM in GCP is if that VM is in the same network resource (and hence, the same GCP account). GCP is smart enough to know if the external IP being addressed is a GCP address, and will route the traffic such that it does not leave the region. This is pretty evident when you ping an external IP from another VM in the region, you'll typically get sub-millisecond response times.
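An easy way to see this for yourself, using the address from the question: ping it from a VM in the same region and compare against a ping from outside the region.
ping -c 4 104.154.103.88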

How can one connect from Heroku to a firewalled host to get data from MongoDb?

I am currently developing a service application that pulls data from Mongo and returns it to consumers. There is a layer of authentication involved, and I am using Heroku to host the service. Mongo was being hosted on MongoLabs, but there were some significant performance concerns, so we have moved to hosting Mongo on one of our cloud servers. We want to be able to secure access to Mongo using a firewall, whitelisting the IP address of the service app on Heroku.
There are a couple of issues with this.
Issues
Well, at least these are the main ones...
Heroku, while providing some nice features like easily managing cluster settings, s/w upgrades, etc., draws IP addresses from a pool. While the DNS value of an application's URL may not change, the underlying IP address can and will change.
To be better secured, mongo-server01 is placed behind a firewall that requires rules to be added using static IP addresses to allow access.
Since Heroku can't provide static IP addresses, we need to consider options for how Heroku can access mongo-server01 while still protecting the data it hosts.
Static IP addresses for outbound requests
There seem to be a couple of options, specifically for Heroku. Fixie and QuotaGuard Static both seem to serve that function, but they appear to be geared toward HTTP and HTTPS communication only (perhaps not even HTTPS).
Mongo doesn't use HTTP; it uses its own network protocol over port 27017 by default:
https://groups.google.com/forum/embed/#!topic/mongodb-user/eX_RIv2cZVw
Does this mean these proxies won't work for calls to Mongo? In theory, there doesn't seem to be any reason that a proxy would be only for HTTP or HTTPS requests. That being said, there doesn't seem to be any way to get into these Heroku plugins and configure the proxy to use a different port or to handle Mongo's particular protocol.
If we could get into the proxy, perhaps we could put an additional set of ssh keys in place so the ssh tunnel chain could continue on to mongo-server01. But there doesn't seem to be any way to ssh to these proxies or access their configuration through the plugin dashboards.
The question (finally!)
How can one connect from Heroku to a firewalled host to get data from MongoDb? Are there proxies that can be used to achieve this?
The simple approach - whitelisting the app's IP with the firewall - won't work, because Heroku applications don't use static IP addresses.
Using a proxy: the Heroku proxy plugins don't know how to proxy the MongoDB protocol, and we can't install ssh keys on the proxy for ssh tunneling.
What can be done to get a connection without opening up the Mongo server to the world?
I spoke with the folks at QuotaGuard and they do have something that does the trick.
we offer a SOCKS proxy which should do the trick as it proxies at the TCP layer
https://devcenter.heroku.com/articles/quotaguardstatic#socks-proxy-setup
I did need to make a simple change to bin/qgsocksify:
#SOCKS_DIR="$(dirname $(dirname $(readlink -f ${BASH_SOURCE[0]})))/vendor/dante"
SOCKS_DIR="${HOME}/vendor/dante"
After that, the proxy worked like a charm.
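For completeness, a sketch of how the wrapper ends up being used, based on the QuotaGuard article linked above (the Procfile entry and server filename are illustrative): the dyno's start command is simply wrapped with qgsocksify, which routes the process's TCP traffic, including MongoDB connections on port 27017, through the static SOCKS proxy.
web: bin/qgsocksify node server.js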