How to set up VPC network peering from multiple App Engine projects to Mongo Atlas - mongodb

I have an App Engine app which connects securely to Mongo Atlas via a network peering connection, and this all works fine.
I have come to want to make the app multi-region, which means creating multiple projects and therefore reproducing the various GCP infrastructure, including the peering connection. However, when I try to reproduce this connection, it fails due to an IP conflict on the Mongo Atlas side between the two "default" VPCs in the two projects.
I can create the VPC network peering on the GCP end OK, sharing the "default" VPC and setting the same Mongo project/network IDs. The default VPC has ranges for each region, e.g. us-west1=10.138.0.0/20, us-west2=10.168.0.0/20 (my original app region), and us-west4=10.182.0.0/20 (the 2nd region I am setting up).
At the MongoDB end, their CIDR block is fixed at 192.168.0.0/16 and cannot be changed. But when I enter the new GCP project ID and "default" VPC, it throws this error:
Error trying to process asynchronous operation: An IP range in the peer network (10.138.0.0/20) overlaps with an IP range (10.138.0.0/20) in an active peer (peer-ABCXYZ) of the local network.
I understand that the IP ranges can't overlap as there would be routing ambiguity. So I'd like to know how to resolve this and connect from both projects.
I noticed that the error was about 10.138 which is us-west1 region, which I'm not even using. So is there a way to limit each VPC peering to only share the region for the project? If I could do that for each, there would be no overlap.
MongoDB has a document about this problem, but it only discusses an AWS solution, and only from their side, without saying how to set up the other end.
https://docs.atlas.mongodb.com/security-vpc-peering/#network-peering-between-an-service-vpc-and-two-virtual-networks-with-identical-cidr-blocks
GCP has a document about the problem, but doesn't seem to offer a resolution, just "you can't do this"
https://cloud.google.com/vpc/docs/vpc-peering#overlapping_subnets_at_time_of_peering
I'm guessing I will need to create a new VPC perhaps with region-limited subnets and only share that VPC? I had a look at "Create VPC network" but it got complex pretty quickly.
What I want is something like:
Project A, us-west2=10.168.0.0/20 <==> Mongo Atlas 192.168.0.0/16
Project B, us-west4=10.182.0.0/20 <==> Mongo Atlas 192.168.0.0/16
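The desired layout above can be sanity-checked with a quick Python sketch using the standard ipaddress module (the CIDRs are the ones quoted in the question): the two regional subnets we actually want to peer are disjoint from each other and from Atlas, while the identical auto-mode ranges for unused regions are what collide.

```python
import ipaddress

atlas  = ipaddress.ip_network("192.168.0.0/16")  # Mongo Atlas, fixed
proj_a = ipaddress.ip_network("10.168.0.0/20")   # Project A, us-west2
proj_b = ipaddress.ip_network("10.182.0.0/20")   # Project B, us-west4

# The ranges we actually want to peer are mutually disjoint...
assert not proj_a.overlaps(proj_b)
assert not proj_a.overlaps(atlas)
assert not proj_b.overlaps(atlas)

# ...but both auto-mode "default" VPCs also carry identical subnets for
# every other region, e.g. us-west1, which is what triggers the error:
uswest1_a = ipaddress.ip_network("10.138.0.0/20")  # project A, unused
uswest1_b = ipaddress.ip_network("10.138.0.0/20")  # project B, unused
assert uswest1_a.overlaps(uswest1_b)
```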
This question is similar, but there are no specific instructions (as the OP didn't want the second connection anyway): Mongodb Atlas Google Cloud peering fails with an ip range in the local network overlaps with an ip range in an active peer
Update
I have since found that one of the reasons this became a problem is that when originally setting up the first app 2 years ago, I just used the "default" VPC, which itself defaults to "auto mode" and automatically creates subnets for all regions, present and future. This can be a time-saver, but GCP recommends against using it in production - for many reasons, including my problem! If you want more control over the subnets and want to avoid conflicts, they recommend a "custom mode" VPC where you define all the subnets yourself.
In my case I didn't need this super-VPC spanning every possible region in the world, just one region. So now I will have to convert it to custom mode and prune back the regions I'm not using in this project to resolve the overlap (even if I use a single-region subnet in the other project, I still need to remove them from the original project to avoid the conflict).

You are right: if you use the default VPC, you have subnets in all regions, and the peering fails because of the overlap.
There are two solutions:
Create a custom VPC in each region/project to create a clean peering
Or (my favorite), create a shared VPC and attach all the regional projects to the host project. In the end it's the same application, just multi-region, and sharing the VPC layer makes a lot of sense.

Guillaume's answer is correct, but I thought I'd add my specific working recipe including how I avoided the conflict without having to reconfigure my original app.
I was going to convert my original app's auto-mode VPC into a custom one, then remove the regions I'm not using (all but us-west2). I practiced this in a different project and it seemed to work quickly and easily, but I wanted to avoid any disruption to my production app.
After researching the IP ranges used by the auto-mode VPC, I realised I could just create a new VPC in my second region using any spare "local" IP range, as long as I avoided both the GCP auto-mode allocation block of 10.128.0.0/9 (10.128.0.0 - 10.255.255.255) and the Mongo Atlas range of 192.168.0.0/16 (192.168.0.0 - 192.168.255.255), so I chose 10.1.0.0/16.
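That choice of range can be verified programmatically - a minimal sketch, assuming the auto-mode allocation block 10.128.0.0/9 and the fixed Atlas block from the question:

```python
import ipaddress

auto_mode = ipaddress.ip_network("10.128.0.0/9")    # GCP auto-mode allocation block
atlas     = ipaddress.ip_network("192.168.0.0/16")  # Atlas CIDR, fixed
chosen    = ipaddress.ip_network("10.1.0.0/16")     # new custom subnet range

# The chosen range avoids both reserved blocks and stays in RFC 1918 space.
assert not chosen.overlaps(auto_mode)
assert not chosen.overlaps(atlas)
assert chosen.is_private
```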
Then I performed these steps:
Create custom VPC "my-app-engine" in my second region project
Add 1 subnet "my-app-engine-us-west4" region: "us-west4" 10.1.0.0/16
Add VPC Peering to this network both at GCP + Mongo Atlas and wait for it to connect
Add the subnet range 10.1.0.0/16 to Atlas Network Access > IP Access List
Re-deployed the app into this VPC with extra app.yaml settings:
network: my-app-engine
subnetwork_name: my-app-engine-us-west4
You have to specify subnetwork_name as well in the app.yaml for custom VPCs, but not for auto-mode ones.
The "IP Access List" caught me out for a while: I'd forgotten that you also have to open the Mongo firewall to the new VPC peering, and I was still getting connection timeouts to the cluster even though the peering was set up.
So now I have two VPC peerings, the original overstuffed one, and this new slim one on a custom network. I will eventually redeploy the old app in this slimmer pattern once the new region is working.

Related

Mongodb Atlas Google Cloud peering fails with an ip range in the local network overlaps with an ip range in an active peer

I have a GCP Project "A" where I had previously added VPC peering with MongoDB Atlas.
This way my development GKE cluster (whose VPC range is 10.7.0.0/16) will support peering when accessing MongoDB. So far everything works as expected.
Now I've provisioned another GKE cluster for preproduction, with its own VPC range on 10.221.0.0/16. I've also created another database in the same Mongo cluster "app-pre" and wanted to add VPC peering for the new cluster.
I followed the same steps:
Mongo Atlas: add peering connection for GCP project "A", VPC name, and CIDR 192.168.0.0/16
GCP Create Peering Connection
The problem is I get the following error:
An ip range in the local network (10.221.0.0/16) overlaps with an ip range (10.221.0.0/16) in an active peer of the peer network
Posting this as an answer in order to help other people.
What @john-hanley mentions is correct: basically, you can't have 2 or more VPC peerings using overlapping IP ranges. This is because the GCP routes would be created with the same priority, and there would be ambiguity about where to send a packet matching those routes.
The message you are getting basically means that you are already using this range in one peering and intend to use the very same range, 10.221.0.0/16, again.
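The routing ambiguity can be illustrated with a small sketch - this is a hypothetical route table, not GCP's actual data model, but it shows why two peers advertising the same prefix leaves no way to pick a single next hop:

```python
import ipaddress

# Hypothetical route table: destination prefix -> next hop (peering name).
# With VPC peering, one route is installed per peered subnet range.
routes = [
    (ipaddress.ip_network("10.221.0.0/16"), "peering-to-dev"),
    (ipaddress.ip_network("10.221.0.0/16"), "peering-to-pre"),
]

def lookup(dest, routes):
    """Return every next hop whose prefix contains the destination."""
    addr = ipaddress.ip_address(dest)
    return [hop for net, hop in routes if addr in net]

# Both routes match with the same prefix length, so the next hop is
# ambiguous - which is exactly why GCP rejects the second peering.
hops = lookup("10.221.4.7", routes)
assert len(hops) == 2
```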

How to establish peering between MongoDB Atlas and Google App Engine Standard Environment Node App

I've set up the peering connection between MongoDB Atlas and Google's "default" VPC, and the connection is labeled as "active" on both ends.
The IP range of the VPC is whitelisted in MongoDB Atlas.
But my Node app hosted in Google App Engine still gets timed out when accessing MongoDB.
I use the MongoDB Atlas connection URL for the peered connection, in the form of (notice the "-pri"):
mongodb+srv://<username>:<password>@<my-cluster>-pri.rthhs.mongodb.net/<dbname>?retryWrites=true&w=majority
Which part am I missing to establish the connection? Do I need a Google VPC connector?
Thanks for any help!
First of all, make sure you are running an M10 cluster or above!!! VPC peering is not available for the M0/M2/M5 shared tiers.
And YES, you do need that connector! All "serverless" services from Google Cloud (like GAE in the standard environment) need it.
Create a connector in the same region as your GAE app following these instructions. You can find the current region of your GAE app with gcloud app describe.
Your app.yaml has to point to that connector like this:
app.yaml
runtime: nodejs10
vpc_access_connector:
  name: projects/GCLOUD_PROJECT_ID/locations/REGION_WHERE_GAE_RUNS/connectors/NAME_YOU_ENTERED_IN_STEP_1
Go to your Atlas project, navigate to Network Access, and whitelist the IP range you set for the connector in step 1.
You may also need to whitelist the IP range from step 1 for the VPC network. You can do that in GCP by navigating to VPC Network -> Firewall.
If you have questions about how to set up the VPC peering between Atlas and Google Cloud, try this tutorial. They do it for Kubernetes Engine (no connector needed), but adding my steps from above will hopefully do the trick.
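One more gotcha once the peering and connector are in place: special characters in the credentials must be percent-encoded before they go into the mongodb+srv URI. A minimal sketch using the standard library (the username, password, and cluster host here are placeholders):

```python
from urllib.parse import quote_plus

# Hypothetical credentials: a password containing '@' or ':' must be
# percent-encoded, or the URI parser will split it in the wrong place.
username = quote_plus("app_user")
password = quote_plus("p@ss:word")

uri = (
    f"mongodb+srv://{username}:{password}"
    "@my-cluster-pri.rthhs.mongodb.net/mydb?retryWrites=true&w=majority"
)
assert "p%40ss%3Aword" in uri  # '@' -> %40, ':' -> %3A
```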
Try Cannot connect to Mongo Atlas using VPC peering from GCP cluster and MongoDB and Google Cloud Functions VPC Peering?.
As a first step, I suggest identifying whether you have connectivity at the network level (and so need to fix the IP whitelist) or no connectivity at all (and need to fix the peering configuration).

How to fix VPC security settings

I am fairly new to AWS, so I am sure that I am just missing something, but here is my problem:
I have created a VPC with 3 subnets and one security group linked to all of them. The security group accepts inbound from my machine. Next I have created two RDS instances (both PostgreSQL), put them into that VPC and linked them to the VPC security group. Weirdly, I can only connect to one of them, for the other one I get a generic time-out error.
Any idea on what I am missing? I can share any more details if needed.
EDIT: Both RDS instances are deployed on the same subnets and I am trying to connect from my machine on the internet.
To fix your issue, please verify:
Both RDS instances have been deployed into the same subnet.
If not, check that both subnets are public subnets and have a route to your internet gateway.
If one RDS (the non-working one) is in a private subnet, you should consider using a bastion to access it, because by default a private subnet has no route to your internet gateway.
Still, below is a simple subnet design if you want to build something secure:
Create 2 public subnets if you want to deploy something directly accessible from the internet (one good practice is to deploy only managed instances there, like load balancers)
Create 2 private subnets with a NAT Gateway and the correct route configuration to it
Create a bastion in your public subnets to be able to access your instances in the private ones
Deploy your RDS instances into the private subnets and create one security group for each (or one for both if they are really linked)
You will find an AWS Quick Start which deploys the whole network stack for you: VPC Architecture - AWS Quick Start.
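The 2-public/2-private design above can be sketched by carving a VPC CIDR into subnets - the /16 and /24 sizes here are illustrative assumptions, not AWS requirements:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")  # example VPC block

# Carve four /24s: two public (one per AZ) and two private (one per AZ).
subnets = list(vpc.subnets(new_prefix=24))[:4]
public_a, public_b, private_a, private_b = subnets

# All four are disjoint and fit inside the VPC block, so routes for the
# public subnets (via the internet gateway) and the private subnets
# (via the NAT Gateway) can never be ambiguous.
for i, a in enumerate(subnets):
    assert a.subnet_of(vpc)
    for b in subnets[i + 1:]:
        assert not a.overlaps(b)
```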

AWS Lambda To Atlas

I want to connect my Lambda function to Mongo Atlas. It was all working fine but I needed to move my function inside a VPC so I could use redis.
Now I cannot connect to my database.
I looked at the security group on the VPC and added the MongoDB port, but with no joy.
Also - the mongo IP Whitelist is as follows for now
0.0.0.0/0 (includes your current IP address)
Is there anything else I should try?
Thank you
I needed to move my function inside a VPC so I could use redis.
If you are:
Using dedicated MongoDB Atlas instances (i.e. not shared-tier M0, M2, and M5 clusters).
And the MongoDB Atlas deployment is hosted on AWS.
Then you could follow these instructions to set up a VPC peering connection.
Please note that MongoDB Atlas supports VPC peering with other AWS VPCs in the same region. AWS does not support cross-region VPC peering. For multi-region clusters, you must create VPC peering connections per-region.
See also the tutorial shown on: Introducing VPC peering to MongoDB Atlas
I struggled for days using this tutorial:
https://www.mongodb.com/blog/post/introducing-vpc-peering-for-mongodb-atlas
but it worked in the end, once I found a step missing from the tutorial (I had used the default VPC).
When creating the custom VPC, route table, and subnet, the subnet was what got me... auto-assign IP needs to be enabled on the subnet.
PS: you need at least two subnets for Lambda, so create one more if you haven't.

Consistent IP Addresses for Auto Scaling / Load Balanced Instances

The Setup
ECS (Containerized) Application (Node.js, API Only)
Auto Scaling Group for ECS Container Instances
Load Balancer in front of auto scaling group
VPC covering all instances and ELB
Database hosted in another VPC, not managed explicitly (MongoDB Atlas), likely not the same region.
The Problem
I want my database to use good security policies, therefore I opt for whitelisting IPs as Atlas recommends - rather than opening up my database to the world with 0.0.0.0/0.
Each server has its own IP address, and in an autoscaling event it would need to be added by automation to the Atlas security rules (which is possible, not ideal).
How can I (using NAT Gateways? Elastic IPs?) get one IP for all of my load-balanced instances?
Failed Solutions?
I tried using a NAT Gateway, essentially scenario 2, where all of my instances were in a private subnet, the NAT was in a public subnet with internet access, and the instances went through it to reach the database. This worked! With an Elastic IP on the NAT I was able to authorize it on Atlas. However, it had weird issues where an instance wouldn't respond for 65-75 seconds, intermittently, when pinged. I suspect this is because it's not technically available on the internet and there's some routing happening that I don't fully understand. Once you got a 200, though, everything would work fine for a bit, then another 70-second latency and back to good again...
Really appreciate the input, have been searching for a while with no luck!
Have you tried a VPC peering connection? As long as the VPC CIDR blocks do not overlap, this is a good option because you can use security groups and private IPs between the peered VPCs.
http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/Welcome.html