I have built a Fargate cluster that runs my website. The service starts the task for the website properly, but the task stops when it tries to connect to my database instance.
MongoError: failed to connect to server [123.456.789.0:27017] on first connect [MongoError: connection 0 to 123.456.789.0:27017 timed out]
How do I add the Fargate cluster to the security group on my database instance? I can't find a public IP address or range for the Fargate cluster, and I can't find any proper guides in the AWS documentation that cover this.
If Mongo is running outside your VPC:
If you are running Fargate inside a private subnet of the VPC, the IP address it connects from will be the NAT Gateway's IP address, found on the NAT Gateway console page.
If it's running inside a public subnet, you can assign a public IP address to your Fargate task using its network interface.
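If you need the exact address to whitelist on the database side, you can look it up programmatically. A minimal sketch with boto3, assuming the tasks sit behind a single NAT Gateway in the VPC; the region and VPC ID are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Find the NAT Gateway(s) in the VPC the Fargate tasks run in.
# "vpc-0123456789abcdef0" is a placeholder for your own VPC ID.
response = ec2.describe_nat_gateways(
    Filters=[{"Name": "vpc-id", "Values": ["vpc-0123456789abcdef0"]}]
)

for nat in response["NatGateways"]:
    for address in nat["NatGatewayAddresses"]:
        # This public IP is what external services (e.g. MongoDB) will see
        # as the source address of your Fargate tasks' outbound traffic.
        print(nat["NatGatewayId"], address.get("PublicIp"))
```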
Late to the party (2022), but the way to do this is not by using IP addresses but, instead, by adding the ECS security group to the inbound rules of whatever we're trying to access (e.g. RDS). So, in the RDS inbound security group rules, instead of using IP addresses, you'd enter the ECS security group identifier (example: sg-asdkmh778e7tugfkjhb).
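As a rough sketch of what that inbound rule looks like through the API (the security group IDs are placeholders, and the database is assumed to listen on 27017):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Allow the ECS tasks' security group to reach the database's security group.
# Both group IDs below are placeholders.
ec2.authorize_security_group_ingress(
    GroupId="sg-0db1111111111111",           # security group on the DB instance
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 27017,                # MongoDB default port
            "ToPort": 27017,
            "UserIdGroupPairs": [
                {"GroupId": "sg-0ecs222222222222"}  # ECS service security group
            ],
        }
    ],
)
```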
Firstly, I have Fargate tasks in private subnets of a VPC and enabled a NAT Gateway so they can connect to ECR for pulling images and to other on-premise servers via the internet. It works perfectly. Later I set up VPC endpoints for ECR (api & dkr), S3, Secrets Manager and CloudWatch Logs and removed the NAT Gateway; communication with AWS services still works, but communication with the on-premise servers breaks. So I re-enabled the NAT Gateway and my application seems to work perfectly with the on-premise servers again. What I am still unclear about is whether the communication with AWS services (ECR, S3, Secrets Manager and CloudWatch) now happens via the internet or over the private network through the VPC endpoints. Please suggest how I can debug these communications.
Thank you for your advice in advance ~
I followed Use a private subnet with internet access, and I can ssh into the tasks with the NAT Gateway enabled and no VPC endpoints. I cannot ssh when I try the VPC endpoints method, as the communication happens via PrivateLink. I still cannot ssh with the VPC endpoints method even with the NAT Gateway enabled.
I think I should be able to ssh, as the NAT Gateway is enabled now.
The VPC endpoints you are creating are specifically "Interface Endpoints". When you create an interface endpoint, AWS will add an elastic network interface (ENI) to your specified subnets and assign it a private IP address in your subnet's address space. In general, you'll also tell AWS to add a DNS entry for that ENI which resolves the service's domain name to that IP (instead of the public IP). You can disable this, but it kind of defeats the purpose.
This effectively means that any time you try to resolve the hostname for that service, it should resolve to your ENI's IP address and thus go over PrivateLink. However, it is important to note that you need to configure your CLI/SDK for the region your ENI is in. Otherwise, it may use the generic DNS entry (which may point to us-east-1 specifically). That will resolve just fine (thanks to your NAT Gateway), but if you are running in another region, your traffic may route unexpectedly over the internet.
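One practical way to check which path the traffic takes is to resolve the service hostnames from inside the task (or another host in the same subnets) and see whether the answers fall inside your VPC's CIDR. A small sketch in Python; the region, hostnames and CIDR are assumptions to replace with your own (S3 is omitted because its gateway endpoint does not change DNS):

```python
import ipaddress
import socket

# Assumed values: replace with your own VPC CIDR and regional service hostnames.
VPC_CIDR = ipaddress.ip_network("10.0.0.0/16")
HOSTNAMES = [
    "api.ecr.us-east-1.amazonaws.com",
    "secretsmanager.us-east-1.amazonaws.com",
    "logs.us-east-1.amazonaws.com",
]

for host in HOSTNAMES:
    addr = socket.gethostbyname(host)
    # An address inside the VPC CIDR means the interface endpoint's DNS entry
    # is being used, i.e. the traffic stays on PrivateLink.
    via = "VPC endpoint (PrivateLink)" if ipaddress.ip_address(addr) in VPC_CIDR else "public internet / NAT"
    print(f"{host} -> {addr} ({via})")
```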
All of this is independent of SSH. Remember, VPC Interface Endpoints are only used to create a private IP address that can be used to route to AWS services. If you are trying to SSH into a Fargate task, that task just needs to be routable. In this particular case, your Fargate tasks are running in your VPC, and are apparently directly routable. No NAT Gateway or interface endpoints should be necessary to reach them.
I have a small Fargate cluster with a service running, and I found that if I disable the public IP the task won't start because it doesn't have a route to pull the container image.
The ELB for ECS Fargate is part of a subnet which has:
internet gateway configured and attached
route table allowing unrestricted outgoing
security policy on the ECS service allows unrestricted outgoing
DNS enabled
My understanding is that the internet gateway acts as a NAT and that the above should permit outgoing internet access; however, I can't make it work. What else is missing?
Just like all other resources in your AWS VPC, if you don't attach a public IP address, then the task either needs to be placed in a subnet with a route to a NAT Gateway to access things outside the VPC, or it needs VPC endpoints to access those resources.
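For reference, this is controlled per service/task by the awsvpc network configuration. A hedged sketch with boto3 (cluster name, task definition, subnets and security group are placeholders); with assignPublicIp set to DISABLED, image pulls only succeed if the subnets route through a NAT Gateway or have the ECR/S3/logs VPC endpoints:

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")  # assumed region

# Placeholder names/IDs; replace with your own cluster, task definition,
# private subnets and security group.
ecs.create_service(
    cluster="my-cluster",
    serviceName="my-website",
    taskDefinition="my-website-task",
    desiredCount=1,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0aaa1111", "subnet-0bbb2222"],
            "securityGroups": ["sg-0ccc3333"],
            # DISABLED means the task gets no public IP, so it needs a NAT
            # Gateway route or VPC endpoints to pull its container image.
            "assignPublicIp": "DISABLED",
        }
    },
)
```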
I have set up an ELB for a persistent public & subnet IP. As far as I can tell my subnet has unrestricted outgoing internet access (internet gateway attached and a route that opens up all outgoing traffic to 0.0.0.0/0). I'm unsure if the service setup will configure the EC2 to use this first and then attempt to set up the container. If not, then it probably doesn't apply.
ELB is for inbound traffic only; it does not provide any sort of outbound networking functionality for your EC2 or Fargate instance. The ELB is not in any way involved when ECS tries to pull a container image.
Having a volatile public IP address is a bit annoying, as my understanding is the security policy will apply to both the ELB/Elastic-provided IP and this one.
What "security policy" are you referring to? I'm not aware of security policies on AWS that are applied directly to IP addresses. Assuming you mean the Security Group when you say "security policy", your understanding is incorrect. Both the EC2 or Fargate instance and the ELB should have different security groups assigned to them. The ELB would have a security group allowing all inbound traffic, if you want it to be public on the Internet. The EC2 or Fargate instance should have a security group only allowing inbound traffic from the ELB (by specifying the ELB's security group ID in the inbound rule).
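A rough sketch of that security group pairing with boto3 (group IDs and ports are placeholders; this assumes HTTPS on the ELB and the application listening on 8080):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

ELB_SG = "sg-0elb11111111"   # placeholder: security group attached to the ELB
TASK_SG = "sg-0task2222222"  # placeholder: security group on the EC2/Fargate task

# ELB security group: open to the Internet on 443.
ec2.authorize_security_group_ingress(
    GroupId=ELB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Task/instance security group: only reachable from the ELB's security group.
ec2.authorize_security_group_ingress(
    GroupId=TASK_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 8080, "ToPort": 8080,
        "UserIdGroupPairs": [{"GroupId": ELB_SG}],
    }],
)
```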
I want to point out you say "EC2" in your question and never mention Fargate, but you tagged your question with Fargate twice and didn't tag it with EC2. EC2 and Fargate are separate compute services on AWS. You would either be using one or the other. It doesn't really matter in this case given the issue you are encountering, but it helps to be clear in your questions.
We have a dedicated M10 cluster in MongoDB Atlas, for which I have created a peering connection with AWS to secure access through the VPC. I have followed this MongoDB document for configuring the peering connection between AWS and the cluster:
https://docs.atlas.mongodb.com/security-vpc-peering/
The peering connection was created successfully and is now active. But the thing is, I am unable to connect to the cluster without whitelisting my IP. When I try to connect without whitelisting the IP, it gives the below error:
Something went wrong MongooseServerSelectionError: Could not connect
to any servers in your MongoDB Atlas cluster. One common reason is
that you're trying to access the database from an IP that isn't
whitelisted. Make sure your current IP address is on your Atlas
cluster's IP whitelist:
https://docs.atlas.mongodb.com/security-whitelist/
After whitelisting the IP, however, I am able to connect to the cluster successfully from my local environment.
What do I need in order to access the cluster from my application within the VPC? I cannot use IP whitelisting, as every user's IP cannot be whitelisted.
I have already whitelisted the CIDR block as mentioned in the above documentation.
IP whitelisting is separate from peering. Peering determines the network, whitelisting determines who on the network is allowed access.
If you want to allow access from anything that has physical connectivity to the database, whitelist the entire world (0.0.0.0/0).
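A hedged sketch of adding that entry through the Atlas Admin API with Python requests. The project (group) ID and API keys are placeholders, and the exact path is an assumption (the v1.0 access-list endpoint, which replaced the older whitelist endpoint); the entry could just as well be the peered VPC's CIDR if you want something narrower than the whole world:

```python
import requests
from requests.auth import HTTPDigestAuth

# Placeholders: Atlas project (group) ID and a programmatic API key pair.
GROUP_ID = "5f1234567890abcdef123456"
PUBLIC_KEY = "your-public-key"
PRIVATE_KEY = "your-private-key"

# Assumed endpoint; adjust to the API version your Atlas project uses.
url = f"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP_ID}/accessList"

# Whitelist everything with physical connectivity, as suggested above.
# Replace with your peered VPC's CIDR (e.g. "172.31.0.0/16") to be stricter.
payload = [{"cidrBlock": "0.0.0.0/0", "comment": "allow any peer with connectivity"}]

resp = requests.post(url, json=payload, auth=HTTPDigestAuth(PUBLIC_KEY, PRIVATE_KEY))
resp.raise_for_status()
print(resp.json())
```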
When I try to reach a private Kubernetes master from a Master Authorized VM in a different VPC, where the Terraform configs are executed, I am unable to reach it and Terraform errors out while creating a Kubernetes secret:
Error: dial tcp (master-public-or-private-endpoint):443: i/o timeout
Google Cloud VPCs are configured with private IP addresses (RFC 1918). This means that VPCs cannot talk to each other using private IP addresses. RFC 1918 addresses are not routable outside the VPC.
You have a few solutions:
Use a public IP address for the Kubernetes master. However, that defeats the purpose of making your cluster private.
Set up VPC Network Peering. This will connect the two VPCs together. The two VPCs cannot use overlapping CIDR ranges.
Set up a VPN server on GCE in one VPC and connect to it from the GCE instance in the other VPC.
Set up Google Cloud VPN.
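Whichever option you pick, a quick way to confirm from the Terraform VM whether the master endpoint is reachable at all (as opposed to a master-authorized-networks problem) is a plain TCP connect with a timeout. A small sketch; the endpoint below is a placeholder:

```python
import socket

# Placeholder: the private (or public) endpoint of the Kubernetes master.
MASTER_ENDPOINT = "10.10.0.2"
PORT = 443

try:
    # A timeout here means there is no route (or a firewall drop) between
    # this VM's VPC and the master's VPC; peering/VPN is not in place yet.
    with socket.create_connection((MASTER_ENDPOINT, PORT), timeout=5):
        print("TCP connection to the master endpoint succeeded")
except OSError as exc:
    print(f"Cannot reach {MASTER_ENDPOINT}:{PORT}: {exc}")
```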
I am having a situation where my MongoDB is running on a separate EC2 instance and my app is running inside a Kubernetes cluster created by kops. Now I want to access the DB from the app running inside k8s.
For this, I tried VPC peering between the k8s VPC and the EC2 instance's VPC, setting the requester VPC as the k8s VPC and the accepter VPC as the instance's VPC. After that, I also added an ingress rule in the EC2 instance's security group to allow access from the k8s cluster's security group on port 27017.
But when I ssh'd into a k8s node and tried telnet, the connection failed.
Is there anything incorrect in the procedure? Is there any better way to handle this?
CIDR blocks:
K8S VPC - 172.20.0.0/16
MongoDB VPC - 172.16.0.0/16
What are the CIDR blocks of the two VPCs? They mustn't overlap. In addition, you need to make sure that communication is allowed to travel both ways when modifying the security groups. That is, in addition to modifying your MongoDB VPC to allow inbound traffic from the K8s VPC, you need to make sure the K8s VPC allows inbound traffic from the MongoDB VPC.
First, this does not seem to be a Kubernetes issue.
Make sure you have the proper route from the Kubernetes VPC to the MongoDB node and vice versa (see the sketch after this list)
Make sure the required ports are open in the security groups of both VPCs
Allow inbound traffic from the Kubernetes VPC to the MongoDB VPC
Allow inbound traffic from the MongoDB VPC to the Kubernetes VPC
Make sure the namespace's security (network policies) allows the inbound and outbound traffic
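The step that is easiest to miss after peering is the routing: each VPC's route table needs a route to the other VPC's CIDR via the peering connection, on top of the security group rule on 27017 you already added. A hedged sketch with boto3, using the CIDRs from the question; the route table and peering connection IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

PEERING_ID = "pcx-0abc123456789"   # placeholder: VPC peering connection ID
K8S_RTB = "rtb-0k8s1111111111"     # placeholder: route table used by the k8s subnets
MONGO_RTB = "rtb-0mongo22222222"   # placeholder: route table used by the MongoDB subnet

# k8s VPC (172.20.0.0/16) -> MongoDB VPC (172.16.0.0/16) via the peering connection.
ec2.create_route(
    RouteTableId=K8S_RTB,
    DestinationCidrBlock="172.16.0.0/16",
    VpcPeeringConnectionId=PEERING_ID,
)

# MongoDB VPC -> k8s VPC, so traffic can route back the other way.
ec2.create_route(
    RouteTableId=MONGO_RTB,
    DestinationCidrBlock="172.20.0.0/16",
    VpcPeeringConnectionId=PEERING_ID,
)
```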