In GCP how do you configure DNS peering between VPCs in different organizations? - google-cloud-dns

I would like to configure DNS peering to allow my SaaS application to provide DNS records for a customer. The GCP documentation states that one of the use cases for a peering zone is for:
"a SaaS provider can give a SaaS customer access to DNS records it manages"
The issue I am facing is that, by default, a peering zone can only be associated with a project within the same organization. I would like to have the customer in Organization A create a peering zone that points to the DNS records in my SaaS Organization B. How do I configure this peering? Is there a way to grant this access through IAM policies?
I have been reading the GCP docs related to DNS peering and IAM and have been unable to find out how to make cross-organization DNS peering work.
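For reference, here is a hedged sketch of how a cross-project peering zone is normally set up with `gcloud`; the project IDs, principal, network names, and DNS name are all placeholders, and the key assumption is that granting the consumer's principal `roles/dns.peer` on the producer project (which carries the `dns.networks.targetWithPeeringZone` permission) is sufficient even when the two projects live in different organizations:

```shell
# Hypothetical projects: customer-project (Org A), saas-project (Org B).
# 1) In the SaaS project (Org B), grant the customer principal the DNS
#    peer role so it may target this project's network with a peering zone:
gcloud projects add-iam-policy-binding saas-project \
    --member="user:admin@customer.example.com" \
    --role="roles/dns.peer"

# 2) In the customer project (Org A), create the peering zone that
#    targets the producer network in the SaaS project:
gcloud dns managed-zones create saas-peering-zone \
    --project=customer-project \
    --description="Peer into SaaS-managed DNS records" \
    --dns-name="internal.saas.example." \
    --networks=customer-vpc \
    --target-network=saas-vpc \
    --target-project=saas-project \
    --visibility=private
```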

Related

Mongodb Atlas Google Cloud peering fails with an ip range in the local network overlaps with an ip range in an active peer

I have a GCP Project "A" where I had previously added VPC peering with MongoDB Atlas:
This way my development GKE cluster (whose VPC range is 10.7.0.0/16) will support peering when accessing MongoDB. So far everything works as expected.
Now I've provisioned another GKE cluster for preproduction, with its own VPC range on 10.221.0.0/16. I've also created another database in the same Mongo cluster "app-pre" and wanted to add VPC peering for the new cluster.
I followed the same steps:
Mongo Atlas: add a peering connection for GCP project "A", with the VPC name and CIDR 192.168.0.0/16
GCP Create Peering Connection
The problem is I get the following error:
An ip range in the local network (10.221.0.0/16) overlaps with an ip range (10.221.0.0/16) in an active peer of the peer network
Posting this as an answer in order to help other people.
What @john-hanley mentions is correct: you can't have two or more VPC peerings that use overlapping IP ranges. GCP would create routes with the same priority for each peering, so it would be ambiguous where to send a packet destined for those ranges.
The message you are getting means that you are already using one range this way and are trying to use that very same range, 10.221.0.0/16, again.
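When diagnosing such collisions, it can help to list the routes already imported over existing peerings before creating a new one. A sketch with hypothetical network, peering, and region names:

```shell
# List active peerings on the VPC (hypothetical network name "vpc-a"):
gcloud compute networks peerings list --network=vpc-a

# List the routes imported from one peer to spot CIDRs that would collide
# with a new peering (10.221.0.0/16 in this case):
gcloud compute networks peerings list-routes atlas-peer-1 \
    --network=vpc-a \
    --region=us-central1 \
    --direction=INCOMING
```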

Google cloud SQL access from multiple VPC

I'm trying to create a GCP PostgreSQL instance and make it accessible from multiple VPC networks within one project.
We have VMs in 4 GCP regions. Each region has its own VPC network and all are peered. But when I create the SQL instance I can map its private IP to only one VPC; the others don't have access to it.
Are there any steps to follow that will allow access from multiple VPCs to one SQL instance?
When you configure a Cloud SQL instance to use private IP, you use private services access. Private services access is implemented as a VPC peering connection between your VPC network and the Google services VPC network where your Cloud SQL instance resides.
That said, your approach is currently not possible. VPC Network Peering has some restrictions, one of which is that only directly peered networks can communicate with each other; transitive peering is not supported.
As Cloud SQL resources are themselves accessed from ‘VPC A’ via a VPC network peering, other VPC networks attached to ‘VPC A’ via VPC network peering cannot access these Cloud SQL resources as this would run afoul of the aforementioned restriction.
On this note, there’s already a feature request for multiple VPC peerings with Cloud SQL VPC.
As a workaround, you could create a proxy VM instance using Cloud SQL proxy. See 1 and 2. For example, the proxy VM instance could be placed in the VPC to which your Cloud SQL instances are attached (VPC A, for example) and it would act as the Cloud SQL Proxy. VM instances in other VPCs connected to VPC A via VPC network peering could forward their SQL requests to the Cloud SQL Proxy VM instance in VPC A, which would then forward the requests to the SQL instance(s) and vice versa.
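A minimal sketch of this workaround, assuming the Cloud SQL Auth Proxy v2 binary and a hypothetical instance connection name; `--address 0.0.0.0` makes the proxy listen on all interfaces so VMs in peered VPCs can reach it (firewall rules must still allow the port):

```shell
# On the proxy VM in VPC A (instance connection name is a placeholder):
./cloud-sql-proxy \
    --address 0.0.0.0 \
    --port 5432 \
    my-project:us-central1:my-postgres &

# From a VM in a VPC peered with VPC A, connect via the proxy VM's
# internal IP (10.128.0.5 here is hypothetical):
psql "host=10.128.0.5 port=5432 user=postgres dbname=appdb"
```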

How to route traffic to GKE private master from another VPC network

Can I route requests to a GKE private master from another VPC? I can't seem to find any way to set up a GCP router to achieve that:
balancers can't use the master IP as a backend in any way
routers can't have a next-hop-ip from another network
I can't (on my own) peer a different VPC network with the master's private network
when I peer the GKE VPC with another VPC, those routes are not propagated
Any solution here?
PS: Besides creating a standalone proxy or using a third-party router...
I have multiple gcp projects, kube clusters are in separate project.
This dramatically changes the context of your question, as VPCs from other projects aren't routable simply by adding project-level network rules.
For cross-project VPC peering, you need to set up a VPC Network Peering.
I want my CI (which is in different project) to be able to access private kube master.
For this, each GKE private cluster has Master Authorized Networks, which are basically IP addresses/CIDRs that are allowed to authenticate with the master endpoint for administration.
If your CI has a unified address or if the administrators have fixed IPs, you can add them to these networks so that they can authenticate to the master.
If there are no unified addresses for these clients, then depending on your specific scenario, you might need some sort of SNATing to "unify" the source of your requests so that it matches the authorized addresses.
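Adding an authorized network can be sketched as follows; the cluster name, zone, and CIDR are placeholders. Note that the flag replaces the existing list, so include any CIDRs you want to keep:

```shell
# Allow the CI egress address (hypothetical 203.0.113.10) to reach
# the private cluster's master endpoint:
gcloud container clusters update my-cluster \
    --zone=us-central1-a \
    --enable-master-authorized-networks \
    --master-authorized-networks=203.0.113.10/32
```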
Additionally, you can create a private cluster without a public endpoint. In that case the master endpoint is only reachable from nodes and VMs inside the cluster's VPC. However:
There is still an external IP address used by Google for cluster management purposes, but the IP address is not accessible to anyone.
This is not supported by Google; workarounds exist, but they are dirty: https://issuetracker.google.com/issues/244483997 (custom routes export/import has no effect).
Google finally added custom routes export to the VPC peering with the master subnet, so the problem is now gone: you can access the private master from a different VPC or through a VPN.
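A sketch of enabling that export, assuming a hypothetical network name; the peering created for the GKE master has a cluster-specific, Google-generated name that you first have to look up:

```shell
# Find the Google-generated peering on the cluster VPC:
gcloud compute networks peerings list --network=cluster-vpc

# Export custom routes over that peering (the peering name below is a
# placeholder for whatever the previous command printed):
gcloud compute networks peerings update gke-1234abcd-peer \
    --network=cluster-vpc \
    --export-custom-routes
```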

How to fix VPC security settings

I am fairly new to AWS, so I am sure that I am just missing something, but here is my problem:
I have created a VPC with 3 subnets and one security group linked to all of them. The security group accepts inbound from my machine. Next I have created two RDS instances (both PostgreSQL), put them into that VPC and linked them to the VPC security group. Weirdly, I can only connect to one of them, for the other one I get a generic time-out error.
Any idea on what I am missing? I can share any more details if needed.
EDIT: Both RDS instances are deployed on the same subnets and I am trying to connect from my machine on the internet.
To fix your issue, please verify the following:
Both RDS instances have been deployed into the same subnet.
If not, check that both subnets are public subnets and have a route to your internet gateway.
If one RDS instance (the non-working one) is in a private subnet, you should consider using a bastion to access it, because by default a private subnet has no route to your Internet Gateway.
Still, below is a simple subnet design if you want to build something secure:
Create 2 public subnets if you want to deploy something directly accessible through the internet (one good practice is to deploy only managed instances there, like load balancers).
Create 2 private subnets with a NAT Gateway and the correct route configuration to it.
Create a bastion in your public subnets to be able to access your instances in the private ones.
Deploy your RDS instances into the private subnets and create one security group for each (or one for both if they are really linked).
You will find an AWS Quick Start that deploys the whole network stack for you: VPC Architecture - AWS Quick Start.
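As a quick way to check whether the failing instance's subnet is actually public, you can inspect the subnets' route tables; the subnet IDs below are placeholders:

```shell
# Show the routes of the route tables associated with both RDS subnets:
aws ec2 describe-route-tables \
    --filters "Name=association.subnet-id,Values=subnet-0aaa,subnet-0bbb" \
    --query "RouteTables[].Routes[]"

# A public subnet's route table should contain a 0.0.0.0/0 route whose
# GatewayId starts with "igw-". If the failing instance's subnet sends
# 0.0.0.0/0 to a NAT gateway (or has no default route), it is private.
```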

Access Redshift cluster deployed in a VPC

I have my Redshift cluster deployed in a VPC inside private subnets. I need to allow an IP address outside the VPC to access the cluster. To whitelist that IP and access the cluster, I tried the following:
Created an inbound rule in the security group attached to the Redshift cluster, with ip-address/32 as the source, port 5439, protocol TCP, type Redshift.
Moved the Redshift cluster into the public subnet.
I did check https://forums.aws.amazon.com/thread.jspa?threadID=134301 ; the poster faced the same issue too.
The steps I tried didn't work. I'd appreciate any suggestion that allows that IP address to access the cluster.
Thanks in advance.
As your second step indicates, I assume you've already put the Redshift cluster in a public subnet in your VPC. Then make sure your network ACL allows ingress on port 5439 and egress on the ephemeral ports.
I think you need to make your Redshift cluster "publicly accessible".
After that, just modify the associated VPC security group to allow access from specific IP addresses, and you should be able to connect to the cluster from outside the VPC.
AWS forum
AWS documentation
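The two steps above can be sketched with the AWS CLI; the cluster identifier, security group ID, and client IP are placeholders:

```shell
# Make the cluster publicly accessible:
aws redshift modify-cluster \
    --cluster-identifier my-redshift \
    --publicly-accessible

# Allow the external IP on the Redshift port in the cluster's
# VPC security group:
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 5439 \
    --cidr 198.51.100.20/32
```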
If the IP address outside the Redshift cluster's VPC belongs to your AWS account, or to another account, VPC peering between the two VPCs can be an option.
If you peer the two VPCs (the one with Redshift and the VPC of the other IP address), it is possible to enable network traffic between the two.
You should also enable that traffic by adding route table entries for the new IP ranges.
And security group entries should be added to the Redshift cluster's inbound rules.
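A hedged sketch of the peering option with the AWS CLI; all IDs and the CIDR are placeholders, and the accept step must be run by the owner of the peer VPC:

```shell
# Request a peering from the client VPC to the Redshift VPC:
aws ec2 create-vpc-peering-connection \
    --vpc-id vpc-client111 \
    --peer-vpc-id vpc-redshift222

# Accept it from the Redshift VPC's side:
aws ec2 accept-vpc-peering-connection \
    --vpc-peering-connection-id pcx-0abc123

# Add routes in both VPCs' route tables pointing the peer's CIDR at the
# peering connection (repeat on the other side with the other CIDR):
aws ec2 create-route \
    --route-table-id rtb-0client \
    --destination-cidr-block 10.50.0.0/16 \
    --vpc-peering-connection-id pcx-0abc123
```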