Does Cloud DNS have a peering limit?

I am peering many Cloud DNS instances to a central hub. Is there a limit on the number of peering connections I can make? I do not see any mention of this in the documentation.

There's no limit called out in the documentation, but whatever the internal limit is, it's far higher than the VPC peering limit, so it's improbable that you'll hit a DNS peering limit before you hit your VPC peering limit.
With that said, in two years I haven't reached it.
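For reference, creating a peering zone toward a hub project looks like this with gcloud; a minimal sketch, where the project and network names (spoke-project, spoke-vpc, hub-project, hub-vpc) and the domain are placeholders:

    # Peering zone in a spoke project: queries for corp.example. made from
    # spoke-vpc are resolved against the hub project's VPC network.
    gcloud dns managed-zones create corp-peering-zone \
        --project=spoke-project \
        --description="Peer to hub DNS" \
        --dns-name=corp.example. \
        --visibility=private \
        --networks=spoke-vpc \
        --target-project=hub-project \
        --target-network=hub-vpc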

Related

In GCP how do you configure DNS peering between VPCs in different organizations?

I would like to configure DNS peering to allow my SaaS application to provide DNS records for a customer. The GCP documentation states that one of the use cases for a peering zone is for:
"a SaaS provider can give a SaaS customer access to DNS records it manages"
The issue I am facing is that, by default, a peering zone can only be associated with a project within the same organization. I would like to have the customer in Organization A create a peering zone that points to the DNS records in my SaaS Organization B. How do I configure this peering? Is there a way to grant this access through IAM policies?
I have been reading the GCP docs related to DNS peering and IAM but have been unable to work out how to make cross-organization DNS peering work.
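One approach worth trying (I can't confirm it end-to-end across organizations, but the pieces are standard): cross-project peering zones require the DNS Peer role (roles/dns.peer) on the producer project, and project-level IAM bindings can reference identities from another organization. A sketch with placeholder project, network, and account names:

    # Org B (SaaS provider, project saas-prod) grants the DNS Peer role
    # to an identity from Org A (the customer). Names are placeholders.
    gcloud projects add-iam-policy-binding saas-prod \
        --member="user:admin@customer-org.example" \
        --role="roles/dns.peer"

    # The customer then creates a peering zone in their own project that
    # targets the provider's network.
    gcloud dns managed-zones create saas-peering \
        --project=customer-project \
        --description="Peer to SaaS provider DNS" \
        --dns-name=saas.example. \
        --visibility=private \
        --networks=customer-vpc \
        --target-project=saas-prod \
        --target-network=saas-vpc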

How to set up VPC network peering from multiple App Engine projects to Mongo Atlas

I have an App Engine app that connects securely to Mongo Atlas via a network peering connection, which is all working fine.
I have come to want to make the app multi-region, which means creating multiple projects and reproducing the various GCP infrastructure in each, including the peering connection. However, when I reproduce this connection, it fails due to an IP conflict on the Mongo Atlas side between the two "default" VPCs in the two projects.
I can create the VPC network peering on the GCP end OK, sharing the "default" VPC and setting the same Mongo project/network IDs. The default VPC has ranges for each region, e.g. us-west1=10.138.0.0/20, us-west2=10.168.0.0/20 (my original app region), and us-west4=10.182.0.0/20 (the second region I am setting up).
At the MongoDB end, their CIDR block is fixed at 192.168.0.0/16 and cannot be changed. But when I enter the new GCP project ID and "default" VPC, it throws this error:
Error trying to process asynchronous operation: An IP range in the peer network (10.138.0.0/20) overlaps with an IP range (10.138.0.0/20) in an active peer (peer-ABCXYZ) of the local network.
I understand that the IP ranges can't overlap as there would be routing ambiguity. So I'd like to know how to resolve this and connect from both projects.
I noticed that the error was about 10.138 which is us-west1 region, which I'm not even using. So is there a way to limit each VPC peering to only share the region for the project? If I could do that for each, there would be no overlap.
MongoDB has a document about this problem, but it only discusses an AWS solution, and only from their side, without saying how to set up the other end:
https://docs.atlas.mongodb.com/security-vpc-peering/#network-peering-between-an-service-vpc-and-two-virtual-networks-with-identical-cidr-blocks
GCP also has a document about the problem, but it doesn't seem to offer a resolution beyond "you can't do this":
https://cloud.google.com/vpc/docs/vpc-peering#overlapping_subnets_at_time_of_peering
I'm guessing I will need to create a new VPC, perhaps with region-limited subnets, and only share that VPC? I had a look at "Create VPC network", but it got complex pretty quickly.
What I want is something like:
Project A, us-west2=10.168.0.0/20 <==> Mongo Atlas 192.168.0.0/16
Project B, us-west4=10.182.0.0/20 <==> Mongo Atlas 192.168.0.0/16
This question is similar, but there are no specific instructions (as the OP didn't want the second connection anyway): Mongodb Atlas Google Cloud peering fails with an ip range in the local network overlaps with an ip range in an active peer
Update
I have since found that one of the reasons this became a problem is that, when originally setting up the first app two years ago, I just used the "default" VPC, which itself defaults to "auto mode" and automatically creates subnets for all regions, present and future. This can be a time-saver, but GCP recommends not using it in production, for many reasons including my problem! If you want more control over the subnets, avoiding conflicts, etc., they recommend a "custom mode" VPC where you define all the subnets yourself.
In my case I didn't need this super-VPC spanning every possible region in the world, just one region. So now I will have to convert it to custom mode and prune back the subnets in the regions I'm not using in this project in order to resolve the overlap (even if I use a single-region subnet in the other project, I still need to remove the overlapping subnets from the original project to avoid the conflict).
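For what it's worth, the conversion itself is a single command, and the unused regional subnets can then be deleted one by one; a sketch, assuming the network is literally named "default" (a subnet can only be deleted when nothing is using it):

    # One-way conversion of the auto-mode VPC to custom mode.
    gcloud compute networks update default --switch-to-custom-subnet-mode

    # Then prune the regional subnets that are not in use, e.g. us-west1.
    # In an auto-mode VPC each regional subnet is also named "default".
    gcloud compute networks subnets delete default --region=us-west1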
You are right: if you use the default VPC, you get subnets in all regions, and the peering fails because of the overlap.
There are two solutions:
Create a custom VPC in each region/project to get a clean peering (sketched below)
Or (my favorite), create a Shared VPC and attach all the regional projects to the host project. In the end it's the same app, just multi-region, and sharing the VPC layer makes a lot of sense.
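A minimal sketch of the first option, assuming placeholder names and a range chosen not to overlap with the other project or with Atlas:

    # Custom-mode VPC with one regional subnet per project, so only that
    # range is visible to the Atlas peering.
    gcloud compute networks create app-vpc --subnet-mode=custom
    gcloud compute networks subnets create app-subnet-us-west4 \
        --network=app-vpc \
        --region=us-west4 \
        --range=10.1.0.0/16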
Guillaume's answer is correct, but I thought I'd add my specific working recipe, including how I avoided the conflict without having to reconfigure my original app.
I was going to convert my original app's auto-mode VPC into a custom one, then remove the regions I'm not using (all but us-west2). I practiced this in a different project and it seemed to work quickly and easily, but I wanted to avoid any disruption to my production app.
After researching the IP ranges used by the auto-mode VPC, I realised I can just create a new VPC in my second region using any spare "local" IP range, as long as I avoid both the GCP auto-mode range of 10.128.0.0/9 (10.128.0.0 - 10.255.255.255) and the Mongo Atlas range of 192.168.0.0/16 (192.168.0.0 - 192.168.255.255), so I chose 10.1.0.0/16.
Then I performed these steps:
Create custom VPC "my-app-engine" in my second region project
Add 1 subnet "my-app-engine-us-west4" region: "us-west4" 10.1.0.0/16
Add a VPC peering to this network on both the GCP and Mongo Atlas sides and wait for it to connect
Add the subnet range 10.1.0.0/16 to Atlas Network Access > IP Access List
Re-deployed the app into this VPC with extra app.yaml settings, nested under the network key:

    network:
      name: my-app-engine
      subnetwork_name: my-app-engine-us-west4
You have to specify subnetwork_name as well in the app.yaml for custom VPCs, but not for auto-mode ones.
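The GCP side of steps 1-3 roughly corresponds to the commands below (the Atlas project ID and VPC name come from the Atlas "New Peering Connection" dialog and are placeholders here; everything else matches the names above):

    # Steps 1-2: custom VPC with a single regional subnet.
    gcloud compute networks create my-app-engine --subnet-mode=custom
    gcloud compute networks subnets create my-app-engine-us-west4 \
        --network=my-app-engine \
        --region=us-west4 \
        --range=10.1.0.0/16

    # Step 3, GCP half of the peering. ATLAS_GCP_PROJECT_ID and
    # ATLAS_VPC_NAME are placeholders shown by the Atlas dialog.
    gcloud compute networks peerings create atlas-peer \
        --network=my-app-engine \
        --peer-project=ATLAS_GCP_PROJECT_ID \
        --peer-network=ATLAS_VPC_NAME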
The "IP Access List" caught me out for a while as I'd forgotten you also have to open the Mongo firewall to the new VPC Peering, and was still getting connection timeouts to the cluster, even though the peering was setup.
So now I have two VPC peerings, the original overstuffed one, and this new slim one on a custom network. I will eventually redeploy the old app in this slimmer pattern once the new region is working.

Mongodb Atlas Google Cloud peering fails with an ip range in the local network overlaps with an ip range in an active peer

I have a GCP project "A" where I had previously added VPC peering with MongoDB Atlas.
This way my development GKE cluster (whose VPC range is 10.7.0.0/16) will support peering when accessing MongoDB. So far everything works as expected.
Now I've provisioned another GKE cluster for preproduction, with its own VPC range on 10.221.0.0/16. I've also created another database in the same Mongo cluster "app-pre" and wanted to add VPC peering for the new cluster.
I followed the same steps:
Mongo Atlas: add a peering connection for GCP project "A", VPC name, and CIDR 192.168.0.0/16
GCP Create Peering Connection
The problem is I get the following error:
An ip range in the local network (10.221.0.0/16) overlaps with an ip range (10.221.0.0/16) in an active peer of the peer network
Posting this as an answer in order to help other people.
What @john-hanley mentions is correct: you can't have two or more VPC peerings that use overlapping IP ranges, because GCP would create routes with the same priority and it would be ambiguous where to send a packet destined for those ranges.
The message you are getting means that one peering is already using this range and you are trying to use the very same range, 10.221.0.0/16, again.
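If it helps to see which ranges are already claimed, you can list the routes an active peer is exporting to your network; a sketch with placeholder network, peering, and region names:

    # List peerings on the network, then inspect what one of them imports.
    gcloud compute networks peerings list --network=my-vpc
    gcloud compute networks peerings list-routes my-atlas-peer \
        --network=my-vpc \
        --region=us-west2 \
        --direction=INCOMING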

AWS Lambda To Atlas

I want to connect my Lambda function to Mongo Atlas. It was all working fine but I needed to move my function inside a VPC so I could use redis.
Now I cannot connect to my database.
I looked at the security group on the VPC and added the MongoDB port, but with no joy.
Also, the Mongo IP whitelist is currently as follows:
0.0.0.0/0 (includes your current IP address)
Is there anything else I should try?
Thank you
"I needed to move my function inside a VPC so I could use redis."
If you are:
using dedicated MongoDB Atlas instances (i.e. not the shared M0, M2, and M5 clusters),
and your MongoDB Atlas deployment is hosted on AWS,
then you could follow these instructions to set up a VPC peering connection.
Please note that MongoDB Atlas peers with AWS VPCs per region; for multi-region clusters, you must create a VPC peering connection for each region.
See also the tutorial shown on: Introducing VPC peering to MongoDB Atlas
I struggled for days using this tutorial:
https://www.mongodb.com/blog/post/introducing-vpc-peering-for-mongodb-atlas
but it worked in the end, once I found a step missing from the tutorial (everything had worked when I used the default VPC). When creating a custom VPC, route table, and subnet, the subnet was what got me: auto-assign public IP needs to be enabled on the subnet.
PS: you need at least two subnets for Lambda, so create one more if you haven't.
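For reference, that subnet setting can also be flipped from the CLI; a sketch with a placeholder subnet ID:

    # Enable auto-assign public IPv4 addresses on the subnet.
    aws ec2 modify-subnet-attribute \
        --subnet-id subnet-0123456789abcdef0 \
        --map-public-ip-on-launch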

Consistent IP Addresses for Auto Scaling / Load Balanced Instances

The Setup
ECS (Containerized) Application (Node.js, API Only)
Auto Scaling Group for ECS Container Instances
Load Balancer in front of auto scaling group
VPC covering all instances and ELB
Database hosted in another VPC, not managed explicitly (MongoDB Atlas), likely not the same region.
The Problem
I want my database to follow good security practice, so I opt for whitelisting IPs, as Atlas recommends, rather than opening up my database to the world with 0.0.0.0/0.
Each server has its own IP address, and in an autoscaling event the new address would need to be added to the Atlas security rules by automation (possible, but not ideal).
How can I (using NAT gateways? Elastic IPs?) get one IP for all of my load-balanced instances?
Failed Solutions?
I tried using a NAT Gateway, essentially scenario 2: all of my instances were in a private subnet, the NAT was in a public subnet with internet access, and the instances went through it to reach the database. This worked! With an Elastic IP on the NAT, I was able to authorize it on Atlas. However, it had weird issues where an instance wouldn't respond for 65-75 seconds, intermittently, when pinged. I suspect this is because it's not technically reachable from the internet and there's some routing happening that I don't fully understand. Once you got a 200, everything would work fine for a bit, then another 70-second latency spike, and back to good again...
Really appreciate the input, have been searching for a while with no luck!
Have you tried a VPC peering connection? As long as the VPC CIDR blocks do not overlap, this is a good option because you can use security groups and private IPs between the peered VPCs.
http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/Welcome.html
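For two VPCs you control, the CLI flow looks roughly like this (all IDs and the CIDR are placeholders; with Atlas, the Atlas side of the peering is configured from their UI instead):

    # Request the peering from VPC A to VPC B.
    aws ec2 create-vpc-peering-connection \
        --vpc-id vpc-0aaa0aaa0aaa0aaa0 \
        --peer-vpc-id vpc-0bbb0bbb0bbb0bbb0

    # Accept it on the peer side.
    aws ec2 accept-vpc-peering-connection \
        --vpc-peering-connection-id pcx-0ccc0ccc0ccc0ccc0

    # Route each VPC's traffic for the peer CIDR through the peering
    # (repeat in the other VPC's route table with the opposite CIDR).
    aws ec2 create-route \
        --route-table-id rtb-0ddd0ddd0ddd0ddd0 \
        --destination-cidr-block 10.0.0.0/16 \
        --vpc-peering-connection-id pcx-0ccc0ccc0ccc0ccc0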