I am fairly new to AWS, so I am sure that I am just missing something, but here is my problem:
I have created a VPC with 3 subnets and one security group linked to all of them. The security group accepts inbound traffic from my machine. Next I created two RDS instances (both PostgreSQL), put them into that VPC, and linked them to the VPC security group. Weirdly, I can only connect to one of them; for the other one I get a generic time-out error.
Any idea on what I am missing? I can share any more details if needed.
EDIT: Both RDS instances are deployed on the same subnets and I am trying to connect from my machine on the internet.
To fix your issue, please verify the following:
Both RDS instances have been deployed into the same subnet.
If not, check that both subnets are public subnets and have a route to your internet gateway.
If one RDS instance (the one that is not working) is in a private subnet, you should consider using a bastion host to access it, because by default a private subnet does not have a route to your internet gateway.
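A rough way to check this from the AWS CLI (a sketch with placeholder identifiers; note that the instance's "Publicly accessible" setting also has to be enabled for connections from the internet):

# Show each RDS instance's public accessibility and the subnets of its subnet group
aws rds describe-db-instances \
  --query 'DBInstances[].[DBInstanceIdentifier,PubliclyAccessible,DBSubnetGroup.Subnets[].SubnetIdentifier]'

# Check whether the route table associated with a given subnet has a 0.0.0.0/0 route to an internet gateway
aws ec2 describe-route-tables \
  --filters Name=association.subnet-id,Values=subnet-0123456789abcdef0 \
  --query 'RouteTables[].Routes[]'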
In any case, below is a simple subnet design if you want to build something secure:
Create 2 public subnets if you want to deploy something directly reachable from the internet (a good practice is to deploy only managed resources there, such as load balancers).
Create 2 private subnets with a NAT gateway and the corresponding route configuration (see the CLI sketch below).
Create a bastion host in your public subnets to be able to access your instances in the private subnets.
Deploy your RDS instances into the private subnets and create one security group for each (or one for both if they are really linked).
You will find an AWS Quick Start that deploys the whole network stack for you: VPC Architecture - AWS Quick Start.
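If you prefer to build the NAT gateway part by hand instead, a minimal CLI sketch could look like this (all IDs are placeholders):

# Allocate an Elastic IP and create a NAT gateway in one of the public subnets
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-PUBLIC --allocation-id eipalloc-EXAMPLE

# Give the private subnets' route table a default route through the NAT gateway
aws ec2 create-route --route-table-id rtb-PRIVATE \
  --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-EXAMPLE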
I have an App Engine app, which connects securely to Mongo Atlas via a network peering connection which is all working fine.
I have come to want to make the app multi-region, which means creating multiple projects and therefore reproducing the various GCP infrastructure, including the peering connection. However, when reproducing this connection, I cannot, due to the IP conflict on the Mongo Atlas side between the two "default" VPCs in each project.
I can create the VPC network peering on the GCP end OK, sharing the "default" VPC and setting the same Mongo project/network IDs. The default VPC has ranges for each region, e.g. us-west1=10.138.0.0/20, us-west2=10.168.0.0/20 (my original app region), and us-west4=10.182.0.0/20 - the 2nd region I am setting up.
At the Mongo DB end, their CIDR block is fixed at 192.168.0.0/16 and cannot be changed. But when I enter the new GCP project ID and "default" VPC, it throws this error:
Error trying to process asynchronous operation: An IP range in the peer network (10.138.0.0/20) overlaps with an IP range (10.138.0.0/20) in an active peer (peer-ABCXYZ) of the local network.
I understand that the IP ranges can't overlap as there would be routing ambiguity. So I'd like to know how to resolve this and connect from both projects.
I noticed that the error was about 10.138 which is us-west1 region, which I'm not even using. So is there a way to limit each VPC peering to only share the region for the project? If I could do that for each, there would be no overlap.
Mongo DB has a document about this problem, but this only discusses an AWS solution and only from their perspective, not saying how to set up the other end.
https://docs.atlas.mongodb.com/security-vpc-peering/#network-peering-between-an-service-vpc-and-two-virtual-networks-with-identical-cidr-blocks
GCP has a document about the problem, but doesn't seem to offer a resolution, just "you can't do this"
https://cloud.google.com/vpc/docs/vpc-peering#overlapping_subnets_at_time_of_peering
I'm guessing I will need to create a new VPC perhaps with region-limited subnets and only share that VPC? I had a look at "Create VPC network" but it got complex pretty quickly.
What I want is something like:
Project A, us-west2=10.168.0.0/20 <==> Mongo Atlas 192.168.0.0/16
Project B, us-west4=10.182.0.0/20 <==> Mongo Atlas 192.168.0.0/16
This question is similar, but there are no specific instructions (as the OP didn't want the second connection anyway): Mongodb Atlas Google Cloud peering fails with an ip range in the local network overlaps with an ip range in an active peer
Update
I have since found one of the reasons this became a problem is because when originally setting up the first app 2 years ago, I just used the "default" VPC which itself defaults to "auto mode" which automatically creates subnets for all regions present and future. This can be a time-saver, but GCP recommends not to use this in production - for many reasons including my problem! If you want more control over the subnets and avoiding conflicts etc, they recommend you use a "custom mode" VPC where you have to define all the subnets yourself.
In my case I didn't need this super VPC of all possible regions in the world, but just one region. So now I will have to convert it to custom-mode and prune back the other regions I'm not using in this project, to be able to resolve the overlap (even if I do use a single-region subnet in another project, I still need to remove them from the original project to avoid the conflict).
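For reference, the conversion and pruning can be done from the command line; a rough sketch (the region is just an example):

# Switch the auto-mode "default" network to custom mode (this is a one-way operation)
gcloud compute networks update default --switch-to-custom-subnet-mode

# Then delete the regional subnets you don't use, e.g. the one in us-west1
gcloud compute networks subnets delete default --region us-west1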
You are right: if you use the default VPC, you have subnets in all regions, and the peering fails because of the overlap.
There are 2 solutions:
Create a custom VPC in each region/project to create a clean peering.
Or (my favorite), create a Shared VPC and attach all the regions/projects to the host project. In the end it's the same application, just multi-region, so sharing the VPC layer makes a lot of sense.
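If you go the Shared VPC route, the setup is roughly (a sketch; project IDs are placeholders):

# Enable Shared VPC on the host project
gcloud compute shared-vpc enable HOST_PROJECT_ID

# Attach each service project to the host project
gcloud compute shared-vpc associated-projects add SERVICE_PROJECT_ID \
  --host-project HOST_PROJECT_ID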
Guillaume's answer is correct, but I thought I'd add my specific working recipe including how I avoided the conflict without having to reconfigure my original app.
I was going to convert my original app's auto-mode VPC into a custom one, then remove the regions I'm not using (all but us-west2). I practiced this in a different project and it seemed to work quickly and easily, but I wanted to avoid any disruption to my production app.
After researching the IP ranges used by the auto-mode VPC, I realised I can just create a new VPC in my second region using any spare "local" IP range, as long as I avoid both the GCP auto-mode range of 10.128.0.0/9 (10.128.0.0 - 10.255.255.255) and the Mongo Atlas range of 192.168.0.0/16 (192.168.0.0 - 192.168.255.255), so I chose 10.1.0.0/16.
Then I performed these steps (a gcloud sketch of the network setup follows below):
Create custom VPC "my-app-engine" in my second region project
Add 1 subnet "my-app-engine-us-west4" region: "us-west4" 10.1.0.0/16
Add VPC Peering to this network both at GCP + Mongo Atlas and wait for it to connect
Add the subnet range 10.1.0.0/16 to Atlas Network Access > IP Access List
Re-deployed the app into this VPC with extra app.yaml settings:
network: my-app-engine
subnetwork_name: my-app-engine-us-west4
You have to specify subnetwork_name as well in the app.yaml for custom VPCs, but not for auto-mode ones.
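For reference, steps 1-3 can be scripted with gcloud roughly like this (a sketch; the Atlas peering project ID and network name come from the Atlas UI and are placeholders here):

# 1. Custom-mode VPC
gcloud compute networks create my-app-engine --subnet-mode=custom

# 2. Single-region subnet using the spare range
gcloud compute networks subnets create my-app-engine-us-west4 \
  --network=my-app-engine --region=us-west4 --range=10.1.0.0/16

# 3. GCP side of the peering (Atlas shows you its project ID and network name)
gcloud compute networks peerings create atlas-peering \
  --network=my-app-engine \
  --peer-project=ATLAS_GCP_PROJECT_ID --peer-network=ATLAS_VPC_NAME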
The "IP Access List" caught me out for a while as I'd forgotten you also have to open the Mongo firewall to the new VPC Peering, and was still getting connection timeouts to the cluster, even though the peering was setup.
So now I have two VPC peerings, the original overstuffed one, and this new slim one on a custom network. I will eventually redeploy the old app in this slimmer pattern once the new region is working.
I have an ECS Fargate cluster created inside a VPC.
If I want to access the above-mentioned AWS services from a Fargate task, what needs to be done?
I see the following options in the different documentation I have read:
Create private link to each AWS service
Create NAT gateway
I am not sure which one is the correct and recommended option.
To be clear, an ECS cluster is an abstracted entity and does not dictate where you connect the workloads you are running within it. If we stick to the Fargate launch type this means that tasks could be launched either on a private subnet or on a public subnet:
If you launch them in a public subnet (and you assign a public IP to the tasks) then these tasks can reach the public endpoints of the services you mentioned and nothing else (from a networking routing perspective) is required.
If you launch them in a private subnet you have two options that are those you called out in your question.
I don't think there is a golden rule for what's best. The decision is multi-dimensional (cost, ease of setup, features, observability and control, etc.). I'd argue the NAT gateway route is easier to set up regardless of the number of services you need to reach, but you may lose a bit of visibility and all your traffic will go outside of the VPC (for some customers this is OK, for others it's not). PrivateLink endpoints will give you tighter control, but they may be more work to set up (especially if you need to reach many services).
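As an illustration of the two options from the CLI (a sketch; the IDs, region, and the S3/ECR service names are just examples):

# Option 1: an interface endpoint (PrivateLink) per service, e.g. the ECR API
aws ec2 create-vpc-endpoint --vpc-id vpc-EXAMPLE --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.ecr.api \
  --subnet-ids subnet-PRIVATE --security-group-ids sg-EXAMPLE

# S3 is also available as a gateway endpoint (no hourly charge)
aws ec2 create-vpc-endpoint --vpc-id vpc-EXAMPLE --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-east-1.s3 --route-table-ids rtb-PRIVATE

# Option 2: a single NAT gateway in a public subnet plus a default route for the private subnets
aws ec2 create-nat-gateway --subnet-id subnet-PUBLIC --allocation-id eipalloc-EXAMPLE
aws ec2 create-route --route-table-id rtb-PRIVATE \
  --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-EXAMPLE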
I am facing a scenario where I have to access a Kubernetes service in GCP project X from a pod running in another GCP project Y.
I know we can access a service from one namespace in another namespace in the same project by using
servicename.namespacename.svc.cluster.local
How can I do something similar across different GCP projects?
I agree with @cperez08, but I am adding my 5 cents.
I think you can try Set up clusters with Shared VPC
With Shared VPC, you designate one project as the host project, and you can attach other projects, called service projects, to the host project. You create networks, subnets, secondary address ranges, firewall rules, and other network resources in the host project. Then you share selected subnets, including secondary ranges, with the service projects. Components running in a service project can use the Shared VPC to communicate with components running in the other service projects.
You can use Shared VPC with both zonal and regional clusters. Clusters that use Shared VPC cannot use legacy networks and must have Alias IPs enabled.
You can configure Shared VPC when you create a new cluster. Google Kubernetes Engine does not support converting existing clusters to the Shared VPC model.
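For example, creating a cluster in a service project on a shared subnet looks roughly like this (a sketch; all names are placeholders, and pod/service secondary ranges can be passed with --cluster-secondary-range-name / --services-secondary-range-name):

# Run in the service project; the network and subnet live in the host project
gcloud container clusters create my-cluster \
  --region us-west2 \
  --enable-ip-alias \
  --network projects/HOST_PROJECT_ID/global/networks/shared-net \
  --subnetwork projects/HOST_PROJECT_ID/regions/us-west2/subnetworks/shared-subnet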
If I understood correctly, projects X and Y are completely different clusters, so I am not sure that's possible. Take a look at https://kubernetes.io/blog/2016/07/cross-cluster-services/ ; maybe you can re-architect your services by federating them in case high availability is needed.
On the other hand, you can always access the resources through a public endpoint/domain if the networks are not connected in some way.
Can I route requests to a GKE private master from another VPC? I can't seem to find any way to set up a GCP router to achieve that:
load balancers can't use the master IP as a backend in any way
routes can't have a next-hop IP in another network
I can't (on my own) peer a different VPC network with the master's private network
when I peer the GKE VPC with another VPC, those routes are not propagated
Any solution here?
PS: besides creating a standalone proxy or using a third-party router...
I have multiple GCP projects; the Kubernetes clusters are in a separate project.
This dramatically changes the context of your question, as VPCs from other projects aren't routable by simply adding project-level network rules.
For cross-project VPC peering, you need to set up a VPC Network Peering.
I want my CI (which is in a different project) to be able to access the private Kubernetes master.
For this, each GKE private cluster has Master Authorized Networks, which are basically IP addresses/CIDRs that are allowed to authenticate with the master endpoint for administration.
If your CI has a unified address or if the administrators have fixed IPs, you can add them to these networks so that they can authenticate to the master.
If there are no unified addresses for these clients, then depending on your specific scenario, you might need some sort of SNATing to "unify" the source of your requests to match the authorized addresses.
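For example, adding a CI range to the authorized networks could look like this (a sketch; the cluster name, zone and CIDR are placeholders):

gcloud container clusters update my-cluster --zone us-central1-a \
  --enable-master-authorized-networks \
  --master-authorized-networks 203.0.113.0/29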
Additionally, you can make a private cluster without a public address. This will allow access to the master endpoint only from within the cluster's VPC (e.g. from the nodes allocated there). However:
There is still an external IP address used by Google for cluster management purposes, but the IP address is not accessible to anyone.
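Such a cluster is created with the private-endpoint flags, roughly (a sketch; names and ranges are placeholders):

gcloud container clusters create my-private-cluster \
  --zone us-central1-a \
  --enable-ip-alias \
  --enable-private-nodes --enable-private-endpoint \
  --master-ipv4-cidr 172.16.0.32/28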
Not supported by Google; workarounds exist (but they are dirty): https://issuetracker.google.com/issues/244483997 (at the time, custom routes export/import had no effect).
Update: Google finally added custom route export on the VPC peering with the master subnet, so the problem is now gone; you can access the private master from a different VPC or through a VPN.
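The export can be enabled on the peering that GKE created towards the master's VPC (its name looks something like gke-...-peer); a sketch with placeholder names:

# List the peerings to find the one GKE created on your cluster's VPC
gcloud compute networks peerings list --network my-gke-vpc

# Export your custom routes (e.g. routes to the other VPC or VPN) towards the master's VPC
gcloud compute networks peerings update gke-1234abcd-peer \
  --network my-gke-vpc --export-custom-routes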
So, I created a PostgreSQL instance in Google Cloud SQL, and I have a Kubernetes cluster with containers that I would like to connect to it. I know that the Cloud SQL proxy sidecar is one method, but the documentation says that I should be able to connect to the private IP as well.
I notice that a VPC peering connection was automatically created for me. It's set for a destination network of 10.108.224.0/24, which is where the instance is, with a "Next hop region" of us-central1, where my K8s cluster is.
And yet when I try the private IP via TCP on port 5432, I time out. I see nothing in the documentation about having to modify firewall rules to make this work, but I tried that anyway, finding the firewall interface in GCP rather clumsy and confusing compared with writing my own rules using iptables; my attempts failed.
Beyond going to the Cloud SQL proxy sidecar, does anyone have an idea why this would not work?
Thanks.
Does your GKE cluster meet the environment requirements for private IP? It needs to be a VPC-native cluster on the same VPC and in the same region as your Cloud SQL instance.
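A quick way to compare the two from the CLI (a sketch; the cluster, region and instance names are placeholders):

# Which VPC is the cluster on, and is it VPC-native (alias IPs enabled)?
gcloud container clusters describe my-cluster --region us-central1 \
  --format='value(network, ipAllocationPolicy.useIpAliases)'

# Which VPC is the Cloud SQL instance's private IP attached to?
gcloud sql instances describe my-postgres \
  --format='value(settings.ipConfiguration.privateNetwork)'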
In the end, the simplest thing to do was to just use the Google Cloud SQL proxy. As opposed to a sidecar, since I have multiple containers needing DB access, I put the proxy into my cluster as its own container with a service, and it seems to just work.
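A minimal sketch of that pattern with kubectl (the image tag, names and connection string are placeholders, and the proxy still needs credentials, e.g. via Workload Identity or a mounted service-account key):

# Run the (v1) Cloud SQL proxy as its own deployment, listening on all interfaces
kubectl create deployment cloud-sql-proxy \
  --image=gcr.io/cloudsql-docker/gce-proxy:1.33.2 \
  -- /cloud_sql_proxy -instances=MY_PROJECT:us-central1:MY_INSTANCE=tcp:0.0.0.0:5432

# Expose it inside the cluster so application pods can reach it as a normal service on port 5432
kubectl expose deployment cloud-sql-proxy --port=5432 --target-port=5432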
You can connect over the VPC peering using the private IP only if your Cloud SQL instance and your compute resources are both in the same VPC.
When creating the Cloud SQL instance or a Compute Engine VM you can choose the VPC and subnet, and you can set up the same for GKE; then you can make the connection from a pod to Cloud SQL.