MongoDB Atlas - Configure IP Whitelist when hosting on AWS S3 and CloudFront

How do I set the IP address whitelist for a MongoDB Stitch application (MongoDB Atlas back end) when the site is hosted on AWS S3 (using CloudFront)?
The site is currently working though I have never set an IP address.
I just don't want it to lose access at some point because I have failed to set the correct IP address whitelist. Perhaps it is not necessary because the cluster is already on AWS?
Thank you!

The IP Whitelist specifies the IPs from which the Atlas cluster will accept client requests. Examples of clients include MongoDB Compass, the mongo shell, and Stitch. In this case, your app only connects to Atlas indirectly, through Stitch, and Stitch automatically adds whitelist entries for itself as a client.
Stitch does not restrict cross-domain requests unless you specify Allowed Origins in the Stitch settings; otherwise, no apps using the client SDKs would work without explicit whitelisting!
Allowed Origins can be set in the Stitch control panel if you only want specific domains to be able to connect to Stitch.
In short, you do not need to configure the IP Whitelist to allow your site to communicate with Stitch. Everything should keep working!
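For reference, a browser app served from S3/CloudFront reaches Atlas only through the Stitch SDK, so there is no client IP to whitelist. A minimal sketch using the Stitch browser SDK, where the app ID is a placeholder:
// Connect to Stitch from the browser; the app ID is a placeholder
import { Stitch, AnonymousCredential } from 'mongodb-stitch-browser-sdk';

const client = Stitch.initializeDefaultAppClient('my-stitch-app-id');

// Authenticate anonymously; Stitch, not the browser, talks to Atlas
client.auth.loginWithCredential(new AnonymousCredential())
  .then(user => console.log('Logged in as', user.id));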

Related

PostgreSQL requests proxied by HTTP server

I am using a mobile application that connects directly to the database instance (Postgres); as such, I have to keep the port open to traffic generated from the internet (4G, mobile app).
This mobile app (QField, the mobile version of QGIS) has a direct connection to the database. That is why the database is reachable from the internet on a public IP, but this is a critical issue for the security of the data and of the requests that can be sent to the database.
I would like to proxy the requests so that the database is only available to local machines and not open to direct connections.
The mobile app would send the request to an HTTP URL, which would forward the request to the local IP and port; this way I would avoid having the database exposed on the internet.
Ideally, I would like to go from this app (which uses a postgres connection string to connect to the server) to an HTTP server that routes the request locally, as such:
APP connects to https://myproxy/postgres
Request is proxied to a local server
Can I do this with Apache2? Any ideas?
At the moment I cannot write a middleware that proxies requests from the app to the local Postgres.
If your application is expecting to connect directly to a PostgreSQL database and you don't want to change that, then you need to connect to something that "speaks" PostgreSQL's client protocol.
You can place a proxy such as pgbouncer or pgpool in front of it, but they aren't a guarantee of greater security just by themselves. This is the same problem as with any proxy - it is just forwarding requests and responses to your actual server so any vulnerability is still exposed.
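Note that Apache's HTTP proxying won't help here, because PostgreSQL clients don't speak HTTP; what you need is something that relays TCP. As a minimal illustrative sketch in Node.js (ports and addresses are placeholders, and this is not a substitute for pgbouncer/pgpool), it also shows why a proxy by itself adds no security - it just copies bytes in both directions:
// Raw TCP relay: accepts public connections and forwards the bytes
// unchanged to a PostgreSQL server reachable only locally.
const net = require('net');

const server = net.createServer((client) => {
  // One upstream connection to the local database per client
  const upstream = net.connect(5432, '127.0.0.1');

  // Copy bytes in both directions without inspecting them
  client.pipe(upstream);
  upstream.pipe(client);

  // Tear down both sides if either errors
  const close = () => { client.destroy(); upstream.destroy(); };
  client.on('error', close);
  upstream.on('error', close);
});

// The only port exposed to the internet
server.listen(5433, '0.0.0.0');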
What you can do is:
restrict the number of connections at the proxy point
restrict which users can connect non-locally to your PostgreSQL cluster
restrict where they can connect from to just your proxy
restrict those users permissions within the database(s)
That last point is particularly important - assume any user account your application uses can be abused maliciously. Restrict the account to prevent mass updating or deleting of data. Also take special care to restrict access to other users' data.
If I were forced to allow access like this, I would want one PostgreSQL user account per actual user at the very least. In practice I wouldn't get to this point with a production application.
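As a rough sketch of what those restrictions might look like in SQL, where the role, database, and table names are all hypothetical:
-- Hypothetical least-privilege role for the mobile app
CREATE ROLE qfield_user LOGIN PASSWORD 'change-me';
REVOKE ALL ON DATABASE surveys FROM PUBLIC;
GRANT CONNECT ON DATABASE surveys TO qfield_user;
-- Allow reading and editing survey data, but nothing else
GRANT SELECT, INSERT, UPDATE ON survey_points TO qfield_user;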

Cloud SQL access from AI Platform job

Google has nice ways to connect to Cloud SQL from other Google services, but I cannot see how to connect from AI Platform jobs. As part of our training job, we need to update our Cloud SQL DB with metrics, but the only way I could get it to work is by whitelisting all IPs (don't want that!) in Cloud SQL and connecting via the public IP. I don't see an option to add cloud-sql-proxy to the trainer instance. Since the IP of the trainer instance is dynamic, we cannot reliably add a specific IP address to the whitelist. Any other ways to handle this?
It looks like AI Platform supports VPC peering, so you should be able to connect to Cloud SQL using private IP.
Since Cloud SQL also uses VPC peering, you'll likely need to do the following to get the resources to connect:
Create a VPC to share (or use the "default" VPC)
Follow the steps here to set up VPC peering for AI Platform in your VPC.
Follow the steps here to set up a private IP for your instance in your VPC.
Since the resources are technically in different networks, you may need to export custom routes (Step #2) to allow AI Platform access to your Cloud SQL instance.
As an alternative to using private IP, you could keep using public IP with an IP allowlist coupled with authorizing with SSL/TLS certificates. This still isn't as secure as using the proxy or private IP (as users are technically able to connect to your instance), but they'll be unable to interact with the database engine without the correct certificates.
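For the certificate route, a hedged sketch of what the client side might look like, assuming a PostgreSQL instance and the node-postgres driver for illustration (the .pem file names are the ones Cloud SQL offers for download; host, user, and database are placeholders):
// Connect over public IP, authenticating with Cloud SQL client certificates
const fs = require('fs');
const { Client } = require('pg');

const client = new Client({
  host: '203.0.113.5',              // instance public IP (placeholder)
  user: 'metrics_writer',           // placeholder user
  password: process.env.DB_PASS,    // placeholder credential
  database: 'metrics',              // placeholder database
  ssl: {
    // Note: Cloud SQL server certs are issued to the instance name,
    // so hostname verification may need adjusting.
    ca: fs.readFileSync('server-ca.pem').toString(),
    cert: fs.readFileSync('client-cert.pem').toString(),
    key: fs.readFileSync('client-key.pem').toString(),
  },
});

client.connect();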
Can you publish a PubSub message from within your training job and have it trigger a cloud function that connects to the database? AI Platform training seems to have IAM restrictions that I too am curious how to control.

Google Cloud SQL - PostgreSQL database connection from QGIS for third parties

I have a Google Cloud SQL PostgreSQL database to which I can connect by using SSL and by entering my IP address in the allowed-connections settings. However, I do not want to list all the IP addresses that are going to connect to this database (because I do not know them all). I have around 15 people whom I want to log in to my database using QGIS, and they should be able to change the data, as this is a research project. Security is not a big issue as this database will be online for a very short period of time. What connection method can you suggest? The users are not very proficient, so I need to set up everything.
I hope you're doing fine.
I would suggest setting up the connections with the Cloud SQL Proxy, as it provides the security needed without using SSL or having to authorize any network. The basic setup is to:
Enable the API
Install the proxy client on your local machine
Determine how you will authenticate the proxy
If required by your authentication method, create a service account
You can also find the steps in "Connecting to Cloud SQL from external applications".
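For reference, once the proxy client is installed, starting it looks roughly like this (the instance connection name and credential file are placeholders); QGIS can then be pointed at 127.0.0.1 port 5432 as if the database were local:
# Start the Cloud SQL Proxy, listening on a local port
./cloud_sql_proxy -instances=my-project:europe-west1:my-instance=tcp:5432 \
                  -credential_file=service-account.json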
I hope this works for you. I have never used it with QGIS, but since the proxy makes the database look like a local server, connecting QGIS through it shouldn't be hard.

Recommended way to connect cloud foundry to mongodb atlas

I've got a Spring Boot app which is connected to MongoDB Atlas.
Everything is working locally.
I now want to publish this to Pivotal Cloud Foundry.
Secure connection between PCF and Atlas
In MongoDB Atlas I need to open up the firewall and allow certain IP addresses.
How should I configure MongoDB Atlas to connect to PCF in the most secure way?
Autoconfigure getting in the way
Cloud Foundry is overriding my connection URLs to point to localhost:27017 instead of my Atlas cluster.
What is the recommended way to connect to MongoDB Atlas?
In MongoDB Atlas I need to open up the firewall and allow certain IP addresses. How should I configure MongoDB Atlas to connect to PCF in the most secure way?
Whitelisting IP addresses for applications that run on CF is not particularly effective. The reason is that you don't know the IP address from which you'll be connecting, because it depends on where Diego decides to run your application. In other words, it depends on the cell where your application is told to run. To compound matters, that will change when you restart / restage your application.
Because the IP can vary, what you end up needing to do is whitelist all of your cells. The problem with this, and why it's not effective, is that you've ended up whitelisting every app running on the platform.
What you can do to improve security a bit is to make use of application security groups. ASGs can be used to limit outgoing traffic, and you can control them at the space level. That means you can configure your default running security group to not allow access to your MongoDB server, but allow access for individual spaces by binding an ASG to only those spaces with apps that need to talk to your MongoDB servers.
The downside of this approach is that it requires you to be a platform administrator, which means it will only work if you own your CF installation (not going to work for public providers).
More on ASGs here: https://docs.cloudfoundry.org/adminguide/app-sec-groups.html
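For illustration, an ASG is just a JSON file of egress rules that you bind with the cf CLI; something like the following, where the destination address, ports, org, and space are placeholders:
# atlas-asg.json - allow outbound traffic only to the Atlas servers
[
  { "protocol": "tcp", "destination": "203.0.113.10/32", "ports": "27017" }
]

# Create the ASG and bind it to just the space that needs Atlas access
cf create-security-group mongodb-atlas atlas-asg.json
cf bind-security-group mongodb-atlas my-org my-space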
For public providers, you can use a proxy. To make this work, you need to configure your application to talk through the proxy when it attempts to access your MongoDB servers. You control the proxies, which have fixed IPs, so you can whitelist the proxies to allow access to just your app. If you don't want to run your own proxy servers, there are public proxy providers that you can use.
Cloud Foundry is overriding my connection URLs to point to localhost:27017 instead of my Atlas cluster. What is the recommended way to connect to MongoDB Atlas?
It's possible to disable auto configuration. One way is described in the docs here. If you include the Spring Cloud Connectors dependencies and use them manually, then the auto configuration will not run.
https://docs.cloudfoundry.org/buildpacks/java/spring-service-bindings.html#manual
The other option is to tell the Java buildpack not to install the auto configuration. You can do that by setting the following environment variable for your application, either with cf set-env or via a manifest.yml file.
Ex: JBP_CONFIG_SPRING_AUTO_RECONFIGURATION='[enabled: false]'
Be careful if you do this as it will disable everything provided by the auto reconfiguration, which includes setting the "cloud" profile for your app. If you use this option to disable auto reconfiguration, you'll probably also want to set SPRING_PROFILES_ACTIVE='cloud' to manually enable the cloud profile.
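For example, in a manifest.yml (the application name is a placeholder):
---
applications:
- name: my-app
  env:
    JBP_CONFIG_SPRING_AUTO_RECONFIGURATION: '[enabled: false]'
    SPRING_PROFILES_ACTIVE: cloud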
I suppose your other option is to simply embrace the auto configuration. It's a little confusing / magical at first, but I've found this article to explain it very well.
https://spring.io/blog/2015/04/27/binding-to-data-services-with-spring-boot-in-cloud-foundry
Hope that helps!

Restrict access to Bluemix app

I have a Node.js Bluemix app. I don't want the Bluemix app URL to be publicly available. Is there a way to restrict access to a Bluemix app from certain IP addresses only?
I know I can build authentication in the app itself but I am trying to avoid that.
Thanks in advance.
The Bluemix Cloud Foundry platform cannot restrict routes to certain IP addresses. Like you said, all authentication has to be part of the application logic.
Check out the express-ipfilter npm module:
https://www.npmjs.com/package/express-ipfilter
Whitelisting certain IP addresses, while denying all other IPs:
// Init dependencies
const express = require('express');
const { IpFilter } = require('express-ipfilter');

// express.createServer() was removed in Express 3; create the app with express()
const app = express();

// Whitelist the following IPs; all other addresses are denied
const ips = ['127.0.0.1'];
app.use(IpFilter(ips, { mode: 'allow' }));

app.listen(3000);
As Ram mentioned, you cannot restrict routes to a certain IP for a Bluemix app, but there is an alternative using IBM Containers.
You can deploy your Node.js app in an IBM Container (Docker) and use the IBM VPN service to restrict access to your container instance to your company's VPN.
You can find more details on this service here:
https://console.ng.bluemix.net/docs/services/vpn/index.html