CloudFormation AWS EIP fails with invalid domain vpc - aws-cloudformation

I am trying to set up an Elastic IP for a Network Load Balancer, but every time I create the stack it fails, reporting that the domain vpc is an invalid parameter value, even though the specified VPC physical ID exists and was created beforehand.

Have you got Domain: vpc set in your CloudFormation template?
This should work:
ElasticIP:
  Type: AWS::EC2::EIP
  Properties:
    Domain: vpc   # the literal string "vpc", NOT a VPC ID like vpc-1234abcd
If you don't specify Domain: vpc you won't be able to attach the EIP to resources inside the VPC, e.g. to the load balancer.
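For context, the EIP is then attached to the NLB by referencing its allocation ID in the load balancer's SubnetMappings. A minimal sketch, assuming a hypothetical PublicSubnet resource in the same template:

NetworkLoadBalancer:
  Type: AWS::ElasticLoadBalancingV2::LoadBalancer
  Properties:
    Type: network
    Scheme: internet-facing
    SubnetMappings:
      - SubnetId: !Ref PublicSubnet                    # hypothetical public subnet in the same VPC
        AllocationId: !GetAtt ElasticIP.AllocationId   # ties the EIP above to the NLB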

Related

Route53 Routing Policies in VPCs

I am deploying an application to us-west-2, us-east-1, and eu-central-1. In each region, I will have a Lambda function and an EC2 instance in the same subnet. (That is, I will have a VPC in us-east-1, a VPC in us-west-2, and a VPC in eu-central-1, and in each VPC I will have a Lambda function and an EC2 instance in the same subnet.)
The VPCs will be connected by VPC peering links and will have non-overlapping CIDR blocks.
The lambda function must connect to a service hosted by the EC2 instance.
I want to set things up so that the Lambda in us-west-2 is routed to the EC2 instance in us-west-2, but if that EC2 instance isn't available, it is routed to the EC2 instance in one of the other 2 regions. And the same for us-east-1 (so the Lambda there connects to the EC2 instance there and fails over to the other 2 regions if necessary) and eu-central-1.
I can set up a private hosted zone in Route53 to do name resolution so the lambda can find the EC2 instance's IP address. How do I configure routing policies so that each lambda is preferentially routed to the EC2 instance in its own region but can fail over to the other 2 regions if its local EC2 instance is unavailable?
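One option would be latency-based records in the private hosted zone, one per region, each with a health check so an unhealthy region is skipped. A minimal sketch of one such record, where the zone, record name, IP, and health check are hypothetical (note that Route53's public health checkers cannot reach private IPs, so health checks for private endpoints are typically backed by a CloudWatch alarm):

UsWest2ServiceRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneId: !Ref PrivateHostedZone      # hypothetical private zone associated with all three VPCs
    Name: service.internal.example.com        # hypothetical record name the Lambdas resolve
    Type: A
    TTL: "60"
    SetIdentifier: us-west-2
    Region: us-west-2                         # latency-based routing: nearest healthy record wins
    HealthCheckId: !Ref UsWest2HealthCheck    # hypothetical CloudWatch-alarm-backed health check
    ResourceRecords:
      - 172.20.1.10                           # hypothetical private IP of the us-west-2 EC2 instance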

Cannot create API Management VPC Link in AWS Console

I'm failing to add a VPC Link to my API Gateway that will link to my application load balancer. The symptom in the AWS Console is that the dropdown box for Target NLB is empty. If I attempt to force the issue via the AWS CLI, an entry is created, but its status says the NLB ARN is malformed.
I've verified the following:
My application load balancer is in the same account and region as my API Gateway.
My user account has admin privileges. I created and added the recommended policy just in case I was missing something.
The NLB ARN was copied directly from the application load balancer page for the AWS CLI creation scenario.
I can invoke my API directly on the ECS instance (it has a public IP for now).
I can invoke my API through the application load balancer public IP.
Possible quirks with my configuration:
My application load balancer has a security group which limits access to a narrow range of IPs. I didn't think this would matter since VPC links are supposed to connect over private DNS.
My ECS instance has private DNS enabled.
My ECS uses EC2 launch type, not Fargate.
Indeed, as suggested in a related post, my problem stems from initially creating an ALB (Application Load Balancer) rather than an NLB (Network Load Balancer). Once I had an NLB configured properly, I was able to configure the VPC Link as described in the AWS documentation.
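For reference, a REST API VPC Link targets the NLB's ARN. A minimal sketch, assuming a hypothetical NetworkLoadBalancer resource in the same template:

ApiVpcLink:
  Type: AWS::ApiGateway::VpcLink
  Properties:
    Name: my-vpc-link
    Description: Link from API Gateway to the internal NLB
    TargetArns:
      - !Ref NetworkLoadBalancer   # Ref returns the load balancer ARN; must be an NLB, not an ALB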

Access Redshift cluster deployed in a VPC

I have my Redshift cluster deployed in private subnets inside a VPC. I need to allow an IP address outside the VPC to access the cluster. To whitelist that IP and access the cluster, I tried the following:
Created an inbound rule in the security group attached to the Redshift cluster, with ip-address/32 as the source, port 5439, protocol TCP, type Redshift.
Placed the Redshift cluster in a public subnet.
I also checked https://forums.aws.amazon.com/thread.jspa?threadID=134301, where someone faced the same issue.
The steps I tried didn't work. I'd appreciate any suggestion that would let that IP address access the cluster.
Thanks in advance.
Since your second step was to put the Redshift cluster in a public subnet of your VPC, I assume that part is done; then make sure your network ACL allows ingress on port 5439 and egress on the ephemeral ports.
I think you also need to make your Redshift cluster "publicly accessible".
After that, just modify the associated VPC security group to allow access from the specific IP addresses, and you should be able to connect to the cluster from outside the VPC.
See the AWS forum and the AWS documentation for details.
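A rough CloudFormation sketch of those two changes; the resource names, subnet group, credentials, and IP address are hypothetical placeholders:

RedshiftCluster:
  Type: AWS::Redshift::Cluster
  Properties:
    ClusterType: single-node
    NodeType: dc2.large
    DBName: mydb
    MasterUsername: masteruser
    MasterUserPassword: !Ref MasterPassword     # hypothetical NoEcho parameter
    PubliclyAccessible: true                    # required to reach the cluster from outside the VPC
    ClusterSubnetGroupName: !Ref PublicSubnetGroup
    VpcSecurityGroupIds:
      - !Ref RedshiftSecurityGroup

RedshiftIngress:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: !Ref RedshiftSecurityGroup
    IpProtocol: tcp
    FromPort: 5439
    ToPort: 5439
    CidrIp: 203.0.113.10/32                     # the single external IP to whitelist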
If the IP address outside the Redshift VPC belongs to another VPC, whether in your AWS account or in a different account, VPC peering between the two VPCs can be an option.
If you peer the two VPCs, the one with Redshift and the VPC of the other IP address, you can enable network traffic between the two.
You should also enable the traffic through route table entries for the new IP ranges, and the corresponding security group entries should be added to the Redshift cluster's inbound rules.

How can I access external MongoDB server on ec2 instance from an app running inside Kubernetes cluster created with kops?

I have a situation where my MongoDB is running on a separate EC2 instance and my app is running inside a Kubernetes cluster created by kops. Now I want to access the DB from the app running inside k8s.
For this, I tried VPC peering between the k8s VPC and the EC2 instance's VPC, setting the k8s VPC as the requester and the instance's VPC as the accepter. After that, I also added an ingress rule to the EC2 instance's security group allowing access from the k8s cluster's security group on port 27017.
But, when I ssh'd into the k8s node and tried with telnet, the connection failed.
Is there anything incorrect in the procedure? Is there any better way to handle this?
CIDR blocks:
K8S VPC - 172.20.0.0/16
MongoDB VPC - 172.16.0.0/16
What are the CIDR blocks of the two VPCs? They mustn't overlap. In addition, you need to make sure that communication is allowed both ways when modifying the security groups. That is, in addition to modifying your MongoDB VPC to allow inbound traffic from the K8s VPC, you need to make sure the K8s VPC allows inbound traffic from the MongoDB VPC.
First, this does not seem to be a Kubernetes issue.
Make sure you have the proper routes from the Kubernetes nodes to the MongoDB node and vice versa.
Make sure the required ports are open in the security groups of both VPCs:
Allow inbound traffic from the Kubernetes VPC to the MongoDB VPC.
Allow inbound traffic from the MongoDB VPC to the Kubernetes VPC.
Make sure any network policies in the namespace allow the inbound and outbound traffic.
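A rough CloudFormation sketch of the routes and security group rule described above, using the CIDR blocks from the question; the route table, peering connection, and security group names are hypothetical:

PeeringRouteFromK8s:
  Type: AWS::EC2::Route
  Properties:
    RouteTableId: !Ref K8sRouteTable        # hypothetical route table of the kops VPC
    DestinationCidrBlock: 172.16.0.0/16     # MongoDB VPC
    VpcPeeringConnectionId: !Ref Peering    # hypothetical peering connection

PeeringRouteFromMongo:
  Type: AWS::EC2::Route
  Properties:
    RouteTableId: !Ref MongoRouteTable      # hypothetical route table of the MongoDB VPC
    DestinationCidrBlock: 172.20.0.0/16     # K8s VPC
    VpcPeeringConnectionId: !Ref Peering

MongoIngressFromK8s:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: !Ref MongoSecurityGroup        # SG attached to the MongoDB EC2 instance
    IpProtocol: tcp
    FromPort: 27017
    ToPort: 27017
    CidrIp: 172.20.0.0/16                   # or use SourceSecurityGroupId with the node SG instead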

How do I Allow Fargate cluster to access external mongodb database instance

I have built a Fargate cluster which is running my website. The service starts the task for the website properly, but the task stops when it tries to connect to my database instance.
MongoError: failed to connect to server [123.456.789.0:27017] on first connect [MongoError: connection 0 to 123.456.789.0:27017 timed out]
How do I add the Fargate cluster to the security group on my database instance? I can't find a public IP address or range for the Fargate cluster, and I can't find any proper guides in the AWS documentation that cover this.
If Mongo is running outside your VPC:
If you are running Fargate inside a private subnet of the VPC, the source IP address will be the NAT gateway's IP address (listed under NAT Gateways in the VPC console).
If it's running inside a public subnet, you can assign a public IP address to your Fargate task through its network interface.
Late to the party (2022), but the way to do this is not with IP addresses; instead, add the ECS service's security group to the inbound rules of whatever you're trying to access (e.g. RDS). So, in the RDS inbound security group rules, instead of an IP address, you'd enter the ECS security group identifier (example: sg-asdkmh778e7tugfkjhb).
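A minimal sketch of that rule in CloudFormation, with hypothetical group names (security-group references like this work when the database sits in the same VPC or a peered VPC):

DatabaseIngressFromEcs:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: !Ref DatabaseSecurityGroup                   # SG attached to the database (e.g. RDS or the Mongo host)
    IpProtocol: tcp
    FromPort: 27017                                       # 3306/5432 for RDS engines
    ToPort: 27017
    SourceSecurityGroupId: !Ref EcsServiceSecurityGroup   # SG assigned to the Fargate tasks (awsvpc ENI)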