Configure network access to MongoDB cluster from Azure App Service

I'm trying to configure network access of a MongoDB cluster to allow connections from an Azure App Service. I found the outbound IP addresses of my App Service in the Azure portal (see Azure docs) and entered them in the IP access list according to the MongoDB Atlas docs, appending "/32" to each address (CIDR notation) to allow only a single host.
However, when trying to connect on App Service start, I get an error telling me to check the IP whitelist of the MongoDB cluster.
This does seem to be the problem, because adding 0.0.0.0/0 (allow access from anywhere) makes the error go away.
What could be the problem here?
I double-checked the outbound IP addresses of the Azure App Service and the IP access list of the MongoDB cluster.
What I did is exactly the answer to another question, so I think I'm missing something...

Actually, /32 is not a valid CIDR in Azure. The minimum size of a subnet in an Azure VNet is /29.
This restricts your range to only 3 usable IPs (not 8, as you might expect), because Azure reserves the first four IPs and the last one for internal routing.
Please also consider that if you are running the MongoDB cluster inside a private network and it is not exposed externally via a network appliance (such as an Application Gateway, Load Balancer, Front Door, or Traffic Manager), you will need to enable VNet Integration on the Azure Web App side.
If this is your case, navigate to your App in the portal and open the "Networking" blade.
There you can add VNet Integration, but keep in mind that in this case the minimum size of your subnet is /28 (you cannot add a smaller subnet).
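For completeness, VNet Integration can also be enabled from the CLI instead of the portal. A minimal sketch, where the app, resource group, VNet, and subnet names are placeholders:
az webapp vnet-integration add \
  --name my-app \
  --resource-group my-rg \
  --vnet my-vnet \
  --subnet webapp-subnet   # the subnet must be at least a /28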

I had only added the IP addresses listed in the "Outbound IP Addresses" property of my Azure App Service. After also adding the IP addresses listed in the "Additional Outbound IP Addresses" property, the App Service connects to the MongoDB cluster successfully.
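To avoid missing any addresses, you can read both properties with the az CLI instead of the portal. A minimal sketch, with app and resource group names as placeholders:
# IPs the app is currently using for outbound traffic
az webapp show --name my-app --resource-group my-rg \
  --query outboundIpAddresses --output tsv
# All IPs the app could ever use (the superset you should whitelist)
az webapp show --name my-app --resource-group my-rg \
  --query possibleOutboundIpAddresses --output tsv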
This is somewhat surprising to me because the documentation on when outbound IPs change says that the "...set of outbound IP addresses for your app changes when you perform one of the following actions:
Delete an app and recreate it in a different resource group (deployment unit may change).
Delete the last app in a resource group and region combination and recreate it (deployment unit may change).
Scale your app between the lower tiers (Basic, Standard, and Premium), the PremiumV2, and the PremiumV3 tier (IP addresses may be added to or subtracted from the set).
..."
None of the above actions happened. 🙄

Related

Azure PostgreSQL Flexible Server Network Security Group Outbound Rules

We have an Azure PostgreSQL Flexible Server on a VNET subnet which we're trying to lock down as much as possible via NSG rules.
As per the Microsoft documentation, we've added rules to cover the guidance given:
High availability features of Azure Database for PostgreSQL - Flexible Server require the ability to send/receive traffic to destination ports 5432, 6432 within the Azure virtual network subnet where Azure Database for PostgreSQL - Flexible Server is deployed, as well as to Azure storage for log archival. If you create Network Security Groups (NSG) to deny traffic flow to or from your Azure Database for PostgreSQL - Flexible Server within the subnet where it's deployed, please make sure to allow traffic to destination ports 5432 and 6432 within the subnet, and also to Azure storage by using service tag Azure Storage as a destination.
We have added another rule to deny all other outbound traffic to lock things down further, but in the Network Watcher flow logs we're seeing blocked outbound traffic to port 443 from the PostgreSQL IP address.
The IP addresses being called are associated with Akamai and Microsoft when investigated, but we're a little puzzled about what they may be doing and how to add the relevant rules to cover this seemingly undocumented behaviour.
A sample of the outbound IP address calls being blocked:
104.74.50.201
23.0.237.118
52.239.130.228
What are the best practices to lock things down but allow PostgreSQL to call out to what it needs to? Is there some more comprehensive documentation somewhere?
[Screenshot: the outbound NSG rules.]
We understand that there's default rules in place, but we're trying to restrict traffic further to very specific resources.
To my knowledge, the recommended steps are:
Create a new low-priority rule to deny all inbound and outbound traffic. On top of that, create higher-priority rules to allow the traffic you need.
If applications are deployed on subnets within the virtual network, allow only those subnet ranges in the NSG inbound rules.
Example:
PostgreSQL deployed with a VNet
Address space: 10.1.0.0/16 and subnet range: 10.1.0.0/24
For inbound, always allow only specific ports and destination IP addresses.
If the application consumes any load balancer / cluster IPs, allow only those IPs as the destination in the outbound rules, as sketched below.
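A rough sketch of that pattern with the az CLI, reusing the 10.1.0.0/24 subnet from the example above (the resource group, NSG name, rule names, and priorities are placeholders to adapt to your scheme):
# Allow intra-subnet traffic to the PostgreSQL ports
az network nsg rule create --resource-group my-rg --nsg-name pg-nsg \
  --name AllowPgSubnetOutbound --priority 100 --direction Outbound --access Allow \
  --protocol Tcp --destination-address-prefixes 10.1.0.0/24 \
  --destination-port-ranges 5432 6432
# Allow log archival to Azure Storage via its service tag
az network nsg rule create --resource-group my-rg --nsg-name pg-nsg \
  --name AllowStorageOutbound --priority 110 --direction Outbound --access Allow \
  --protocol Tcp --destination-address-prefixes Storage --destination-port-ranges 443
# Deny everything else outbound at the lowest priority
az network nsg rule create --resource-group my-rg --nsg-name pg-nsg \
  --name DenyAllOutbound --priority 4000 --direction Outbound --access Deny \
  --protocol '*' --destination-address-prefixes '*' --destination-port-ranges '*'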

GCP Add VPN Tunnels from one Peer VPN Gateway to another Peer VPN Gateway appears impossible; only the source VPN Gateway is available in Peer list

Within one project, I created two VPC networks, one in region us-central1 and one in region us-east1. Each has a subnet 10.0.x.0/24. I know I could use VPC peering to connect these two subnets; however, my goal is to verify that I can set up an HA VPN connection between these two VPC networks.
For each VPC network I created an HA VPN gateway, named for its respective region: "vpn-gateway-central" and "vpn-gateway-east"; each has two public IPs for HA. I also created two cloud routers (one per VPC) for BGP use.
I fail when I try to create the VPN tunnels. My expectation, based on available online tutorials (which show an older GCP UI), is that I create the tunnels in both directions, just like non-cloud VPN tunnels. Going from central to east, I attempt to create the tunnel in the "central" VPC, and I expect its remote peer(s) to be the set of IPs from the "east" VPC.
The GCP UI does something unexpected: it has me "SELECT PROJECT", and then it populates a drop-down for the "VPN gateway name" from which I select the peer. Here I would expect to see a list of VPN gateways that do NOT exist within the VPC network from which I am starting. Thus, if I am starting from the "central" VPC network, I expect to see the "east" gateway under "VPN gateway name". However, all I see is the VPN gateway within the "central" region. The initiator and peer IPs cannot be the same, but that is the result of making the only selection offered in the "VPN gateway name" listbox.
I clearly cannot create this tunnel. Is this a bug in the new UI? Is this a beta? This GCP console UI has definitely changed from the one shown in the online tutorials, where it appears to work (it exposes the remote VPN gateways, as one would intuitively expect, not the ones resident to the VPC network from which I am creating the tunnel).
This is my first VPN within GCP, so I'm likely missing something. In any case, if it's not broken, it is at least confusing. I would appreciate a clarification/trick/workaround.
[Screenshots: the GCP "Add VPN tunnels" dialogue where the unexpected list of VPN gateway names appears.]
I haven't noticed any reported issues regarding Cloud VPN on GCP.
To create an HA VPN between VPCs, the proper documentation is Google's guide for creating HA VPN between Google Cloud networks.
In case you cannot create it via the UI, you can try to create it via gcloud commands; that way you'll get more information about the issue that may be happening.
I recommend checking that guide, following it, and pasting the output of the gcloud commands here if it fails.
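For reference, a sketch of the tunnel creation for one direction, using the gateway names from the question (the router name and shared secret are placeholders; repeat with --interface 1, then mirror both commands for the east-to-central direction):
gcloud compute vpn-tunnels create tunnel-central-to-east-if0 \
  --region us-central1 \
  --vpn-gateway vpn-gateway-central \
  --peer-gcp-gateway vpn-gateway-east \
  --interface 0 \
  --ike-version 2 \
  --shared-secret MY_SHARED_SECRET \
  --router router-central
# If gcloud cannot resolve the peer gateway by name across regions,
# pass its full resource URI to --peer-gcp-gateway instead.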

Why can't App Engine connect to Compute Engine VM instance?

I have a VM instance (e2-micro) on GCP running PostgreSQL. I added my own external IP address to pg_hba.conf so I can connect to the database from my local machine. Besides that, I have a Node.js application which I want to connect to that database. Locally that works; the application can connect to the database on the VM instance. But when I deploy the app to GCP, I get a 500 Server Error when I try to visit the page in the browser.
These are the things I already did/tried:
Created a firewall rule to allow connections from my own external IP address
Created a VPC connector and added that connector to my app.yaml
Made sure everything is in the same project and region (europe-west1)
If I allow all IP addresses on my VM instance with 0.0.0.0/0, then App Engine can connect, so my guess is that I'm doing something wrong with the connector? I use 10.8.0.0/28 as the IP range, while the internal IP address of the VM instance is 10.132.0.2; is that an issue? I also tried an IP range with 10.0.0.0, but that didn't work either.
First, check if your app uses a /28 IP address range (see the documentation):
When you create a connector, you also assign it an IP range. Traffic sent through the connector into your VPC network will originate from an address in this range. The IP range must be a CIDR /28 range that is not already reserved in your VPC network.
When you create a VPC connector, a proper firewall rule is also created to allow traffic:
An implicit firewall rule with priority 1000 is created on your VPC network to allow ingress from the connector's IP range to all destinations in the network.
As you wrote yourself, when you create a rule that allows traffic from any IP, your app can connect. So look for the rule that allows traffic from the IP range that your app is in; if it's not there, create it.
Or you can connect your app to your DB over public IPs; in that case you also have to create a proper rule that allows the traffic from the app to the DB.
Second, check the IP of the DB that the app uses.
My guess is that you didn't change the IP of the DB (that the app uses), so it tries to connect not via the VPC connector but via the external IP, and that's why it cannot (and works only when you create an allow-everything firewall rule).
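For instance, a rule allowing the connector range from the question to reach Postgres might look roughly like this (the rule name is a placeholder, and this assumes the VM sits on the default network):
gcloud compute firewall-rules create allow-connector-to-postgres \
  --network default \
  --direction INGRESS \
  --action ALLOW \
  --rules tcp:5432 \
  --source-ranges 10.8.0.0/28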
This answer pointed me in the right direction: https://stackoverflow.com/a/64161504/3323605.
I needed to deploy my app with
gcloud beta app deploy
since the VPC connector method was in beta at the time. Also, I had tried to connect to the external IP in my app.yaml, but that needed to be the internal IP, of course.

How to connect MongoDB Atlas to GCP (Google Cloud Platform)?

I'm trying to connect my app, hosted on Google Cloud Platform (GCP) App Engine, to my Mongo Atlas DB.
Mongo wants me to whitelist the GCP app's IP.
But GCP doesn't give me a static IP to whitelist.
I want to make sure I apply security best practices, and as far as I understand, whitelisting my DB for all IPs is not secure. So how can I do it without opening it to all IPs?
You have 2 solutions:
You can grant the App Engine IP ranges. But it's not secure, as described in the documentation:
From this example, we see that both the 8.34.208.0/20 and 8.35.192.0/21 IP ranges can be used for App Engine traffic. Other queries for any additional netblocks may return additional IP ranges.
Note that using static IP address filtering is not considered a safe and effective means of protection. For example, an attacker could set up a malicious App Engine app which could share the same IP address range as your application. Instead, we suggest that you take a defense in depth approach using OAuth and Certs.
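For reference, the netblocks in that quoted example were discovered with DNS TXT lookups, which was the mechanism documented at the time (this record set has since been deprecated in favour of Google's published IP range files):
nslookup -q=TXT _cloud-netblocks.googleusercontent.com 8.8.8.8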
You can perform VPC peering. This requires several things:
Have a paid subscription to Mongo Atlas
Create a [peering between Mongo Atlas and your project](https://docs.atlas.mongodb.com/security-vpc-peering/)
Create a serverless VPC connector and add it to your App Engine app to allow it to reach private IPs on the VPC (and on peerings attached to the VPC, like your Mongo Atlas DB)
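The connector itself can be created with gcloud, roughly like this (the connector name, region, and range are placeholders; the range must be an unused /28 in your VPC):
gcloud compute networks vpc-access connectors create my-connector \
  --region us-central1 \
  --network default \
  --range 10.8.0.0/28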
You also have the option of reserving a static IP while creating a VM.
On the "Create instance" page, scroll to "Networking" and you are presented with options for your:
I. Internal IP
II. External IP
If you are running an M10 cluster (or higher) on Atlas, VPC peering is the way to go. I'd recommend trying this tutorial. It explains which CIDR ranges (what you referred to as IPs) to whitelist.
One thing to notice here: they are using GCP's Kubernetes Engine. With App Engine there is a little extra effort, as it is one of GCP's "serverless" solutions, which is the reason why you should not use static IPs or anything like that. You will need to connect your app to the VPC network via a connector:
1. Create a connector in the same region as your GAE app following these instructions. You can find out the current region of your GAE app with gcloud app describe. Just give the connector the range 10.8.0.0 for now (/28 is added automatically). Remember the name you gave it.
2. Depending on your environment, your app has to point to that connector. In Node.js it's your app.yaml file, and it looks similar to this:
runtime: nodejs10
vpc_access_connector:
  name: projects/GCLOUD_PROJECT_ID/locations/REGION_WHERE_GAE_RUNS/connectors/NAME_YOU_ENTERED_IN_STEP_1
3. Go to your Atlas project, navigate to Network Access, and whitelist the CIDR range you set for the connector in step 1.
4. You may also need to whitelist the CIDR range from step 1 for the VPC network. You can do that in GCP by navigating to VPC Network -> Firewall.

GKE Kubernetes external domain provider

I built a simple cluster in GKE with two services using this tutorial:
https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app
After finishing that, I'm able to access my service using the external IP address, so I bought a domain to use with this IP address. After setting up an A record pointing to that IP address in the DNS settings, the domain doesn't work; it keeps loading and then shows ERR_CONNECTION_TIMED_OUT. Do I need to do something in the Google console, or how can I make this IP public and reachable through the domain?
Please refer to the official documentation, which describes the steps you need to take to configure domain names with a static IP.
These are the steps you need to cover:
Go to the NETWORKING section of the GCP console, then VPC Network -> External IP addresses, to ensure that you are running a static IP address, not an ephemeral one.
Go to Network services -> Cloud DNS. You need to create a DNS zone, entering your domain name on the DNS name line. After creation you will see Add record set, where you need to paste your external IP address.
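A rough gcloud equivalent of those two steps (the address name, zone name, example.com domain, region, and 203.0.113.10 address are all placeholders for your own values):
# Promote the service's ephemeral external address to a static one
gcloud compute addresses create web-ip \
  --addresses 203.0.113.10 --region us-central1
# Create the zone and an A record pointing at that address
gcloud dns managed-zones create my-zone \
  --dns-name example.com. --description "Zone for example.com"
gcloud dns record-sets create example.com. \
  --zone my-zone --type A --ttl 300 --rrdatas 203.0.113.10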
There is also a good tutorial on YouTube about setting up a custom domain on GCP. Let me know if it works for you.