Route53 Routing Policies in VPCs - amazon-route53

I am deploying an application to us-west-2, us-east-1, and eu-central-1. In each region I will have a Lambda function and an EC2 instance in the same subnet (that is, a VPC in each of the three regions, and in each VPC a Lambda function and an EC2 instance sharing a subnet).
The VPCs will be connected by VPC peering and will have non-overlapping CIDR blocks.
The Lambda function must connect to a service hosted on the EC2 instance.
I want to set things up so that the Lambda in us-west-2 is routed to the EC2 instance in us-west-2, but if that EC2 instance isn't available, it is routed to the EC2 instance in one of the other two regions. The same goes for us-east-1 and eu-central-1: each Lambda connects to the EC2 instance in its own region and fails over to the other two regions if necessary.
I can set up a private hosted zone in Route53 for name resolution so the Lambda can find the EC2 instance's IP address. How do I configure routing policies so that each Lambda is preferentially routed to the EC2 instance in its own region but can fail over to the other two regions if its local EC2 instance is unavailable?
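One way this kind of setup is often described is latency-based routing records in the private hosted zone (associated with all three VPCs), each with a health check attached, so that an unhealthy regional record drops out of resolution. Below is a minimal boto3 sketch of those records; the hosted zone ID, record name, instance IPs, and health check IDs are placeholders, and because the instances only have private IPs, the health checks would typically need to be backed by CloudWatch alarms rather than direct endpoint checks.

```python
import boto3

route53 = boto3.client("route53")

# Placeholders for this sketch: zone ID, record name, private IPs, and
# health check IDs (for private IPs these would normally be health checks
# backed by CloudWatch alarms, since Route53 checkers can't reach them).
HOSTED_ZONE_ID = "Z0EXAMPLE"
RECORD_NAME = "service.internal.example."

records = [
    {"region": "us-west-2",    "ip": "10.0.1.10", "health_check": "hc-usw2-id"},
    {"region": "us-east-1",    "ip": "10.1.1.10", "health_check": "hc-use1-id"},
    {"region": "eu-central-1", "ip": "10.2.1.10", "health_check": "hc-euc1-id"},
]

changes = [
    {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": RECORD_NAME,
            "Type": "A",
            "SetIdentifier": r["region"],
            "Region": r["region"],          # latency-based routing: a resolver in a
                                            # given VPC prefers its own region's record
            "TTL": 30,                      # short TTL so failover takes effect quickly
            "HealthCheckId": r["health_check"],
            "ResourceRecords": [{"Value": r["ip"]}],
        },
    }
    for r in records
]

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Comment": "regional records, failover via health checks",
        "Changes": changes,
    },
)
```

With all three records healthy, each Lambda resolves the name to its own region's instance; if a record's health check fails, that record is withdrawn and resolution falls back to the next-best region.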

Related

Google Cloud SQL access from multiple VPCs

I'm trying to create a GCP Cloud SQL PostgreSQL instance and make it accessible from multiple VPC networks within one project.
We have VMs in 4 GCP regions. Each region has its own VPC network, and all of them are peered. But when I create the SQL instance, I can map its private IP to only one VPC; the others don't have access to it.
Are there any steps I can follow that will allow access to one SQL instance from multiple VPCs?
When you configure a Cloud SQL instance to use private IP, you use private services access. Private services access is implemented as a VPC peering connection between your VPC network and the Google services VPC network where your Cloud SQL instance resides.
That said, your approach is currently not possible. VPC network peering has some restrictions, one of which is that only directly peered networks can communicate with each other: transitive peering is not supported.
As Cloud SQL resources are themselves accessed from ‘VPC A’ via a VPC network peering, other VPC networks attached to ‘VPC A’ via VPC network peering cannot access these Cloud SQL resources as this would run afoul of the aforementioned restriction.
On this note, there’s already a feature request for multiple VPC peerings with Cloud SQL VPC.
As a workaround, you could create a proxy VM instance using Cloud SQL proxy. See 1 and 2. For example, the proxy VM instance could be placed in the VPC to which your Cloud SQL instances are attached (VPC A, for example) and it would act as the Cloud SQL Proxy. VM instances in other VPCs connected to VPC A via VPC network peering could forward their SQL requests to the Cloud SQL Proxy VM instance in VPC A, which would then forward the requests to the SQL instance(s) and vice versa.
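To make the workaround concrete, here is a minimal client-side sketch: instead of dialing the Cloud SQL private IP (which is unreachable across the second peering hop), clients in the peered VPCs connect to the proxy VM in VPC A. The host, port, database name, and credentials below are all placeholders.

```python
# Sketch only: connect to the Cloud SQL Proxy VM in VPC A instead of the
# Cloud SQL instance's private IP. All connection details are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="10.128.0.5",   # private IP of the proxy VM in VPC A (reachable via peering)
    port=5432,           # port the Cloud SQL proxy listens on
    dbname="appdb",
    user="appuser",
    password="example-password",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone())
```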

AWS EKS and VPC CloudFormation

I'm creating an EKS cluster and a VPC via CloudFormation. My VPC has four subnets, and from those I am giving two subnets to the EKS cluster. But after giving two subnets, it returns the error Subnets specified must be in at least two different AZs (Service: AmazonEKS; Status Code: 400; Error Code: InvalidParameterException), even though I have already given two subnets. When I give three subnets, it creates the EKS cluster successfully.
My EKS cluster has 3 nodes. I tried creating it with 2 nodes as well, but that did not work either.
My VPC info.
Subnet01Block 192.168.0.0/24
Subnet02Block 192.168.64.0/24
Subnet03Block 192.168.128.0/24
Subnet04Block 192.168.192.0/24
VpcBlock 192.168.0.0/16
As per the docs, you must select subnets that belong to different AZs, so you need to update your VPC configuration.
When you create an Amazon EKS cluster, you specify the Amazon VPC subnets for your cluster to use. Amazon EKS requires subnets in at least two Availability Zones.
When you select subnets for EKS, next to each subnet in the options you see a letter (a, b, c, etc.) indicating its Availability Zone. Choose subnets whose letters differ and you should be good to go.
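If you prefer to check this outside the console, a small boto3 sketch like the one below lists the Availability Zones of the subnets you intend to pass to EKS (the subnet IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder subnet IDs: the two subnets you plan to hand to the EKS cluster.
subnet_ids = ["subnet-0aaaaaaaaaaaaaaaa", "subnet-0bbbbbbbbbbbbbbbb"]

subnets = ec2.describe_subnets(SubnetIds=subnet_ids)["Subnets"]
azs = {s["AvailabilityZone"] for s in subnets}
print("Availability Zones covered:", sorted(azs))

if len(azs) < 2:
    print("EKS will reject this selection: subnets must span at least two AZs.")
```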

Terraform GCP: Unable to reach Private Kubernetes Master to create kubernetes_secret

When I try to reach a private Kubernetes master from a master-authorized VM in a different VPC, where the Terraform configs are executed, I am unable to reach it and Terraform errors out while creating a Kubernetes secret.
Error: dial tcp (master-public-or-private-endpoint):443: i/o timeout
Google Cloud VPCs use private (RFC 1918) IP addresses. RFC 1918 addresses are not routable outside the VPC, so by default separate VPCs cannot talk to each other over their private IP addresses.
You have a few solutions:
Use a public IP address for the Kubernetes master. However, that defeats the purpose of making your cluster private.
Set up VPC Network Peering. This connects the two VPCs; they cannot use overlapping CIDR ranges.
Set up a VPN server on a GCE instance in one VPC and connect to it from the GCE instance in the other VPC.
Set up Google Cloud VPN.
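Whichever option you choose, a plain TCP test from the VM that runs Terraform is a quick way to confirm whether the master endpoint is reachable on port 443; the same i/o timeout shows up here while routing or peering is still missing. The endpoint below is a placeholder.

```python
# Connectivity check sketch: can this VM open a TCP connection to the
# Kubernetes master endpoint on 443? The address is a placeholder.
import socket

MASTER_ENDPOINT = "172.16.0.2"

try:
    with socket.create_connection((MASTER_ENDPOINT, 443), timeout=5):
        print("TCP connection succeeded: the endpoint is reachable from this VPC.")
except OSError as exc:
    print(f"TCP connection failed ({exc}): routing/peering to the master is missing.")
```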

How can I access an external MongoDB server on an EC2 instance from an app running inside a Kubernetes cluster created with kops?

My MongoDB is running on a separate EC2 instance, and my app is running inside a Kubernetes cluster created by kops. Now I want to access the DB from the app running inside k8s.
For this, I tried VPC peering between the k8s VPC and the EC2 instance's VPC, setting the requester VPC to the k8s VPC and the accepter VPC to the instance's VPC. After that, I also added an ingress rule to the EC2 instance's security group allowing access from the k8s cluster's security group on port 27017.
But when I SSH'd into a k8s node and tried telnet, the connection failed.
Is there anything incorrect in the procedure? Is there any better way to handle this?
CIDR blocks:
K8S VPC - 172.20.0.0/16
MongoDB VPC - 172.16.0.0/16
What are the CIDR blocks of the two VPCs? They mustn't overlap. In addition, you need to make sure that communication is allowed to travel both ways when modifying the security groups. That is, in addition to modifying your MongoDB VPC to allow inbound traffic from the K8s VPC, you need to make sure the K8s VPC allows inbound traffic from the MongoDB VPC.
First, this does not seem to be a Kubernetes issue.
Make sure you have the proper route from the Kubernetes nodes to the MongoDB node and vice versa (a sketch for this check follows below)
Make sure the required ports are open in the security groups of both VPCs
Allow inbound traffic from the Kubernetes VPC to the MongoDB VPC
Allow inbound traffic from the MongoDB VPC to the Kubernetes VPC
Make sure the namespace's network policies allow the inbound and outbound traffic
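For the routing check above, a boto3 sketch like this lists the route tables of the K8s VPC and looks for a route to the MongoDB VPC's CIDR that points at the peering connection (the VPC ID is a placeholder; the CIDR is the one from the question):

```python
import boto3

ec2 = boto3.client("ec2")

K8S_VPC_ID = "vpc-0123456789abcdef0"   # placeholder: the kops/K8s VPC
MONGO_CIDR = "172.16.0.0/16"           # MongoDB VPC CIDR from the question

route_tables = ec2.describe_route_tables(
    Filters=[{"Name": "vpc-id", "Values": [K8S_VPC_ID]}]
)["RouteTables"]

found = False
for table in route_tables:
    for route in table["Routes"]:
        if route.get("DestinationCidrBlock") == MONGO_CIDR:
            found = True
            print(table["RouteTableId"], "->", route.get("VpcPeeringConnectionId"))

if not found:
    print("No route to", MONGO_CIDR, "- add one via the peering connection.")
```

The same check is needed in the other direction: the MongoDB VPC's route tables need a route back to the K8s VPC's CIDR (172.20.0.0/16) via the peering connection.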

How to access MongoDB on an EC2 instance from a Lambda in a VPC setup?

In the security group (inbound rules) of the EC2 instance, I have opened the MongoDB port to the security group of the Lambda function.
EC2 inbound rules screenshot
In the Lambda function, I have also selected the VPC, subnets, and security group.
Lambda vpc configuration screenshot
I have also configured a NAT gateway for the VPC.
Still, I am not able to access MongoDB.
You should add the subnets in your VPC (the ones used by your Lambda function) as the source in the EC2 instance's security group inbound rules.
For example, a rule like the one shown in the (omitted) screenshot, but with the port changed from 3306 to the MongoDB port, e.g. 27017.
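As a sketch of that rule in code (in place of the screenshot), the boto3 call below adds an inbound rule for the MongoDB port sourced from the Lambda's subnet CIDR ranges; the security group ID and CIDRs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholders: the EC2 instance's security group and the CIDRs of the
# subnets the Lambda function is attached to.
INSTANCE_SG_ID = "sg-0123456789abcdef0"
LAMBDA_SUBNET_CIDRS = ["10.0.1.0/24", "10.0.2.0/24"]

ec2.authorize_security_group_ingress(
    GroupId=INSTANCE_SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 27017,   # MongoDB port instead of the 3306 shown for MySQL
        "ToPort": 27017,
        "IpRanges": [
            {"CidrIp": cidr, "Description": "Lambda subnet"}
            for cidr in LAMBDA_SUBNET_CIDRS
        ],
    }],
)
```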