I have a VPC network, and that network has a private endpoint for my database connection. All servers are able to connect to the database without fail. However, once I run a Cloud Build (in a private pool, in the same VPC network), the Cloud Build worker cannot seem to find or connect to my private endpoint to build out the static pages of the website.
Do I need to set up a special VPN? How can I even begin to troubleshoot this?
Sadly yes, you have to set up a VPN as described in that documentation (it is written for GKE, but the principle is the same).
In fact, you need to have a look at the underlying architecture.
From the Cloud Build private pool to your VPC, a peering is created.
From your VPC to the Cloud SQL private IP (or a GKE private control plane, or MongoDB Atlas), a peering is created.
Therefore the network architecture is the following:
Cloud Build private pool -> peering -> VPC -> peering -> Cloud SQL private IP
One of the limitations of VPC peering on GCP is non-transitivity (if A -> B and B -> C, then A can't reach C).
That's why a VPN is a (non-glorious) solution.
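If you want to see the non-transitivity for yourself, the sketch below lists the peerings on your VPC and the routes each one exchanges; the network, region and peering names are placeholders for your own.
```
# Placeholder names (my-vpc, us-central1, PEERING_NAME); substitute your own.
# 1. List the peerings attached to your VPC (one toward the Cloud Build
#    private pool, one toward the Cloud SQL / service producer network).
gcloud compute networks peerings list --network=my-vpc

# 2. Inspect the routes a given peering imports into your VPC. Routes learned
#    from one peering are not re-exported to the other, which is the
#    non-transitivity described above.
gcloud compute networks peerings list-routes PEERING_NAME \
    --network=my-vpc \
    --region=us-central1 \
    --direction=INCOMING
```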
Firstly, I have Fargate tasks in private subnets of a VPC and enabled a NAT Gateway so they could reach ECR to pull images, as well as other on-premise servers, via the internet. It worked perfectly. Later I set up VPC endpoints for ECR (api & dkr), S3, Secrets Manager and CloudWatch Logs and removed the NAT Gateway; communication with the AWS services still works, but communication with the on-premise servers breaks. So I re-enabled the NAT Gateway and my application works with the on-premise servers again. What I am still unclear about is whether the communication with the AWS services (ECR, S3, Secrets Manager and CloudWatch) happens via the internet or via the private network through the VPC endpoints. Please suggest how I can debug these communications.
Thank you in advance for your advice ~
I followed Use a private subnet with internet access, and I can SSH into the tasks without VPC endpoints, with only the NAT gateway enabled. I cannot SSH when I try the VPC-endpoints method, as the communication happens via PrivateLink. I still cannot SSH with the VPC-endpoints method even with the NAT Gateway enabled.
I think I should be able to SSH now that the NAT Gateway is enabled.
The VPC endpoints you are creating are specifically "interface endpoints". When you create an interface endpoint, AWS adds an elastic network interface (ENI) to your specified subnets and assigns it a private IP address in your subnet's address space. In general, you'll also tell AWS to add a DNS entry for that ENI which resolves the service's domain name to that IP (instead of the public IP). You can disable this, but it kind of defeats the purpose.
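For reference, creating such an interface endpoint with private DNS enabled looks roughly like the sketch below; the IDs, region and service name are placeholders, not taken from your setup.
```
# Hypothetical IDs/region; illustrates the private-DNS option described above.
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0abc1234 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.eu-west-1.ecr.api \
    --subnet-ids subnet-0aaa1111 subnet-0bbb2222 \
    --security-group-ids sg-0ccc3333 \
    --private-dns-enabled
```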
This effectively means that any time you try to resolve the hostname for that service, it should resolve to your ENI's IP address and thus go over PrivateLink. However, it is important to note that you need to configure your CLI/SDK for the region your ENI is in. Otherwise, it may use the generic DNS entry (which may point to us-east-1 specifically). That will resolve just fine (thanks to your NAT Gateway), but if you are running in another region, your traffic may route unexpectedly over the internet.
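To debug which path the traffic actually takes, one simple check (assuming eu-west-1 and a made-up account ID; adjust to your region and account) is to resolve the service hostnames from inside the VPC. If the private DNS entries are active, they return the endpoint ENIs' private IPs rather than public addresses.
```
# Run from a task or instance inside the VPC. Private IPs (e.g. 10.x.x.x)
# mean the traffic goes over the interface endpoints; public IPs mean it
# goes out via the NAT Gateway.
dig +short api.ecr.eu-west-1.amazonaws.com
dig +short 123456789012.dkr.ecr.eu-west-1.amazonaws.com
dig +short secretsmanager.eu-west-1.amazonaws.com
dig +short logs.eu-west-1.amazonaws.com
```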
All of this is independent of SSH. Remember, VPC interface endpoints are only used to create a private IP address that can be used to route to AWS services. If you are trying to SSH into a Fargate task, that task just needs to be routable. In this particular case, your Fargate tasks are running in your VPC, and are apparently directly routable. No NAT Gateway or interface endpoints should be necessary to reach them.
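As a rough illustration (the cluster name, task ID and IP below are made up): to SSH into a Fargate task you only need its private IP, a route to its subnet, and port 22 open in its security group; the endpoints play no part.
```
# Find the task's ENI details, including its private IPv4 address.
aws ecs describe-tasks \
    --cluster my-cluster \
    --tasks 0123456789abcdef0 \
    --query 'tasks[0].attachments[0].details'

# Then SSH to the private IP reported there, from somewhere that can route
# to the task's subnet (e.g. a bastion or over a VPN).
ssh ec2-user@10.0.1.25
```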
I'm trying to create a GCP PostgreSQL instance and make it accessible from multiple VPC networks within one project.
We have VMs in 4 GCP regions. Each region has its own VPC network and all of them are peered. But when I create the SQL instance I can map its private IP to only one VPC; the others don't have access to it.
Are there any steps to follow that will allow access from multiple VPCs to one SQL instance?
When you configure a Cloud SQL instance to use private IP, you use private services access. Private services access is implemented as a VPC peering connection between your VPC network and the Google services VPC network where your Cloud SQL instance resides.
That said, your approach is currently not possible. VPC network peering has some restrictions, one of which is that only directly peered networks can communicate with each other; transitive peering is not supported.
As Cloud SQL resources are themselves accessed from ‘VPC A’ via a VPC network peering, other VPC networks attached to ‘VPC A’ via VPC network peering cannot access these Cloud SQL resources as this would run afoul of the aforementioned restriction.
On this note, there’s already a feature request for multiple VPC peerings with Cloud SQL VPC.
As a workaround, you could create a proxy VM instance running the Cloud SQL Proxy. See 1 and 2. The proxy VM instance could be placed in the VPC to which your Cloud SQL instances are attached (VPC A, for example) and act as the Cloud SQL Proxy. VM instances in other VPCs connected to VPC A via VPC network peering could forward their SQL requests to the Cloud SQL Proxy VM instance in VPC A, which would then forward the requests to the SQL instance(s) and vice versa.
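A rough sketch of that workaround, assuming the v1 cloud_sql_proxy binary and placeholder project/instance names: run the proxy on a VM in VPC A and have it listen on all interfaces so peered VPCs can reach it. You will also need a firewall rule in VPC A allowing TCP 5432 from the peered ranges.
```
# On the proxy VM in VPC A (placeholder connection name).
./cloud_sql_proxy \
    -instances=my-project:us-central1:my-instance=tcp:0.0.0.0:5432

# From a VM in a peered VPC, connect to the proxy VM's private IP instead of
# the Cloud SQL private IP (placeholder address and database).
psql "host=10.128.0.5 port=5432 user=postgres dbname=mydb"
```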
Can I route requests to a GKE private master from another VPC? I can't seem to find any way to set up a GCP router to achieve that:
load balancers can't use the master IP as a backend in any way
routers can't have a next-hop IP from another network
I can't (on my own) peer a different VPC network with the master's private network
when I peer the GKE VPC with another VPC, those routes are not propagated
Any solution here?
PS: Besides creating a standalone proxy or using a third-party router...
I have multiple GCP projects; the Kubernetes clusters are in a separate project.
This dramatically changes the context of your question, as VPCs from other projects aren't routable by simply adding project-level network rules.
For cross-project VPC peering, you need to set up a VPC Network Peering.
I want my CI (which is in a different project) to be able to access the private kube master.
For this, each GKE private cluster has Master Authorized Networks, which are basically IP addresses/CIDRs that are allowed to authenticate with the master endpoint for administration.
If your CI has a unified address or if the administrators have fixed IPs, you can add them to these networks so that they can authenticate to the master.
If there are no unified addresses for these clients, then depending on your specific scenario, you might need some sort of SNATing to "unify" the source of your requests to match the authorized addresses.
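For example (cluster name, zone and CIDR below are placeholders), adding your CI's egress range to the master authorized networks could look like this:
```
# Allow the CI's fixed egress range to reach the private master endpoint.
gcloud container clusters update my-cluster \
    --zone us-central1-a \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.0/28
```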
Additionally, you can make a private cluster without a public address. This allows access to the master endpoint only from within the cluster's VPC (i.e. from the nodes allocated there). However:
There is still an external IP address used by Google for cluster management purposes, but the IP address is not accessible to anyone.
Not supported by Google; workarounds exist (but they are dirty):
https://issuetracker.google.com/issues/244483997 (custom routes export/import has no effect)
Google finally added custom routes export to the VPC peering with the master subnet. So the problem is now gone; you can access the private master from a different VPC or through a VPN.
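A minimal sketch of that setup, with placeholder cluster/network names: look up the peering GKE created for the private control plane, then enable custom-route export on it so routes toward your other VPC or VPN are shared with the master's network.
```
# Find the peering created for the private control plane.
gcloud container clusters describe my-cluster \
    --zone us-central1-a \
    --format="value(privateClusterConfig.peeringName)"

# Enable custom route export on that peering.
gcloud compute networks peerings update PEERING_NAME_FROM_ABOVE \
    --network=my-vpc \
    --export-custom-routes
```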
So, I created a PostgreSQL instance in Google Cloud, and I have a Kubernetes cluster with containers that I would like to connect to it. I know that the Cloud SQL Proxy sidecar is one method, but the documentation says that I should be able to connect to the private IP as well.
I notice that a VPC peering connection was automatically created for me. It's set for a destination network of 10.108.224.0/24, which is where the instance is, with a "Next hop region" of us-central1, where my K8s cluster is.
And yet when I try the private IP via TCP on port 5432, I time out. I see nothing in the documentation about having to modify firewall rules to make this work, but I tried that anyway (finding the firewall interface in GCP rather clumsy and confusing compared with writing my own rules using iptables), and my attempts failed.
Beyond falling back to the Cloud SQL Proxy sidecar, does anyone have an idea why this would not work?
Thanks.
Does your GKE cluster meet the environment requirements for private IP? It needs to be a VPC enabled cluster on the same VPC and region as your Cloud SQL instance.
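A couple of quick checks for those requirements (cluster, zone and instance names are placeholders): confirm the cluster is VPC-native and that both resources reference the same VPC network.
```
# Cluster network and whether it is VPC-native (alias IPs enabled).
gcloud container clusters describe my-cluster \
    --zone us-central1-a \
    --format="value(network,ipAllocationPolicy.useIpAliases)"

# The VPC the Cloud SQL instance's private IP is attached to.
gcloud sql instances describe my-instance \
    --format="value(settings.ipConfiguration.privateNetwork)"
```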
In the end, the simplest thing to do was to just use the Google Cloud SQL Proxy. As opposed to a sidecar, since I have multiple containers needing DB access, I put the proxy into my cluster as its own container with a Service, and it seems to just work.
Only if your Cloud SQL instance and your compute resources are in the same VPC can you connect over the peered private IP.
When creating the Cloud SQL instance and the compute VM you can choose the VPC and subnet; set up the same VPC for GKE and you can make the connection from a pod to Cloud SQL.
I am fairly new to AWS, so I am sure that I am just missing something, but here is my problem:
I have created a VPC with 3 subnets and one security group linked to all of them. The security group accepts inbound traffic from my machine. Next, I created two RDS instances (both PostgreSQL), put them into that VPC and linked them to the VPC security group. Weirdly, I can only connect to one of them; for the other one I get a generic time-out error.
Any idea on what I am missing? I can share any more details if needed.
EDIT: Both RDS instances are deployed in the same subnets and I am trying to connect from my machine over the internet.
Please verify the following to fix your issue (a sketch of the corresponding AWS CLI checks follows this list):
Both RDS instances have been deployed into the same subnet.
If not, check that both subnets are public subnets and have a route to your internet gateway.
If one RDS instance (the one that is not working) is in a private subnet, you should consider using a bastion to access it, because by default it will not have a route to your Internet Gateway.
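A hedged sketch of those checks with the AWS CLI (the instance identifier and subnet ID are placeholders): list the subnets in the instance's subnet group, then inspect the route table of the suspect subnet for a 0.0.0.0/0 route to an internet gateway.
```
# Which subnets the RDS instance can land in.
aws rds describe-db-instances \
    --db-instance-identifier my-db \
    --query 'DBInstances[0].DBSubnetGroup.Subnets[].SubnetIdentifier'

# Routes of the route table associated with one of those subnets; a public
# subnet has a 0.0.0.0/0 route whose target is an igw-... internet gateway.
aws ec2 describe-route-tables \
    --filters Name=association.subnet-id,Values=subnet-0abc1234 \
    --query 'RouteTables[].Routes[]'
```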
But you will still find below a simple subnet design if you want to build something secure:
Create 2 public subnets if you want to deploy something directly accessible through the internet (one good practice is to deploy only managed instances there, like load balancers).
Create 2 private subnets with a NAT Gateway and the correct route configuration to it.
Create a bastion in your public subnets to be able to access your instances in the private ones.
Deploy your RDS instances into the private subnets and create one security group for each (or one for both if they are really linked).
You will find an AWS Quick Start which deploys the whole network stack for you: VPC Architecture - AWS Quick Start.