How to expose AWS Fargate ECS containers to the internet with Route53 DNS?

I have an ECS task running on Fargate and my service gets a public IP address automatically. How do I simply expose the Fargate task to the internet with Route53 DNS? I was looking around the internet for a whole day and I couldn't find an example of this simplest possible scenario, where I just want to expose a port from the task to the internet and map a Route53 record to point to its IP address.
If I understood correctly from the minimal information I found, I would have to create a VPC endpoint, but I couldn't find information about how to route the traffic to a task/container.

I am also having this problem. I was able to route traffic to a single-container service in public subnets via an application load balancer. However, now I cannot reproduce it. Did you try the ALB yet?
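If all you need for now is a DNS name that follows the task's current public IP (accepting that the IP changes every time the task is replaced), a small script can look the IP up and upsert a Route53 record. A minimal sketch with boto3, where the cluster, service, hosted zone ID and record name are all placeholders:

    import boto3

    ecs = boto3.client("ecs")
    ec2 = boto3.client("ec2")
    route53 = boto3.client("route53")

    # Find the running task and its elastic network interface (ENI)
    task_arns = ecs.list_tasks(cluster="my-cluster", serviceName="my-service")["taskArns"]
    task = ecs.describe_tasks(cluster="my-cluster", tasks=task_arns)["tasks"][0]
    eni_id = next(
        d["value"]
        for att in task["attachments"] if att["type"] == "ElasticNetworkInterface"
        for d in att["details"] if d["name"] == "networkInterfaceId"
    )

    # The public IP lives on the ENI's association
    eni = ec2.describe_network_interfaces(NetworkInterfaceIds=[eni_id])["NetworkInterfaces"][0]
    public_ip = eni["Association"]["PublicIp"]

    # UPSERT an A record in the hosted zone pointing at the task
    route53.change_resource_record_sets(
        HostedZoneId="Z0000000000000",  # placeholder hosted zone ID
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "task.example.com",
                "Type": "A",
                "TTL": 60,
                "ResourceRecords": [{"Value": public_ip}],
            },
        }]},
    )

For anything long-lived, fronting the service with an ALB and pointing a Route53 alias record at the ALB is the more robust option, since the record doesn't need rewriting every time the task is replaced.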

Related

What will happen if AWS Fargate Tasks are provisioned in private subnet with VPC Endpoints and NAT Gateway enabled?

Firstly, I have Fargate tasks in private subnets of a VPC with a NAT Gateway enabled so they can reach ECR to pull images and talk to other on-premises servers via the internet. That works perfectly. Later I set up VPC endpoints for ECR (api & dkr), S3, Secrets Manager and CloudWatch Logs and removed the NAT Gateway; communication with AWS services still works, but communication with the on-premises servers breaks. So I re-enabled the NAT Gateway and my application works with the on-premises servers again. What I am still unclear about is whether the communication with the AWS services (ECR, S3, Secrets Manager and CloudWatch) happens via the internet or via the private network through the VPC endpoints. Please suggest how I can debug the communication.
Thank you for your advice in advance.
I followed Use a private subnet with internet access & I can SSH into the tasks without VPC endpoints & NAT Gateway enabled. I cannot SSH when I try the VPC endpoints method, as the communication happens via PrivateLink. I still cannot SSH with the VPC endpoints method and the NAT Gateway enabled.
I think I should be able to SSH as the NAT Gateway is enabled now.
The VPC endpoints you are creating are specifically "Interface Endpoints". When you create an interface endpoint, AWS will add an elastic network interface (ENI) to your specified subnets and assign it a private IP address in your subnet's address space. In general, you'll also tell AWS to add a DNS entry for that ENI which resolves the service's domain name to that IP (instead of the public IP). You can disable this, but it kind of defeats the purpose.
This effectively means that any time you try to resolve the hostname for that service, it should resolve to your ENI's IP address and thus go over PrivateLink. However, it is important to note that you need to configure your CLI/SDK for the region your ENI is in. Otherwise, it may use the generic DNS entry (which may point to us-east-1 specifically). That will resolve just fine (thanks to your NAT Gateway), but if you are running in another region, your traffic may route unexpectedly over the internet.
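One quick way to check which path a service actually takes is to resolve its hostname from inside the VPC (from a task or an instance in the same subnets) and see whether you get back the endpoint ENI's private address. A rough sketch, assuming eu-west-1 endpoints; substitute your own region and services. Note that an S3 gateway endpoint works through route-table entries rather than DNS, so S3 will still resolve to a public IP even when the traffic stays private.

    import ipaddress
    import socket

    # Hostnames the tasks actually use; adjust region/service to your setup.
    hosts = [
        "api.ecr.eu-west-1.amazonaws.com",          # ECR API
        "secretsmanager.eu-west-1.amazonaws.com",   # Secrets Manager
        "logs.eu-west-1.amazonaws.com",             # CloudWatch Logs
        # add your registry's <account>.dkr.ecr.<region>.amazonaws.com as well
    ]

    for host in hosts:
        ip = socket.gethostbyname(host)
        private = ipaddress.ip_address(ip).is_private
        print(f"{host} -> {ip} ({'VPC endpoint / PrivateLink' if private else 'public, via NAT'})")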
All of this is independent of SSH. Remember, VPC Interface Endpoints are only used to create a private IP address that can be used to route to AWS services. If you are trying to SSH into a Fargate task, that task just needs to be routable. In this particular case, your Fargate tasks are running in your VPC and are apparently directly routable. No NAT Gateway or interface endpoints should be necessary to reach them.
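For reference, creating one of those interface endpoints with private DNS enabled looks roughly like this with boto3; the VPC, subnet and security group IDs (and the region) are placeholders:

    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")  # the region your VPC is in

    # Interface endpoint for the ECR API, with private DNS so the normal
    # service hostname resolves to the endpoint's ENI inside the VPC.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0123456789abcdef0",               # placeholder
        ServiceName="com.amazonaws.eu-west-1.ecr.api",
        SubnetIds=["subnet-0123456789abcdef0"],      # placeholder private subnet
        SecurityGroupIds=["sg-0123456789abcdef0"],   # must allow 443 from the tasks
        PrivateDnsEnabled=True,
    )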

AWS ECS Fargate Outbound Internet Connectivity

I've got a small Fargate cluster with a service running, and I found that if I disable the public IP the container won't launch because it doesn't have a route to pull the image.
The ELB for ECS Fargate is part of a subnet which has:
internet gateway configured and attached
route table allowing unrestricted outgoing
security policy on the ECS service allows unrestricted outgoing
DNS enabled
My understanding is that the internet gateway acts as a NAT and that the above should permit outgoing internet access, but I can't make it work. What else is missing?
Just like all other resources in your AWS VPC, if you don't attach a public IP address, then it needs either to be placed in a subnet with a route to a NAT Gateway to access things outside the VPC, or it needs VPC endpoints to access those resources.
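A rough boto3 sketch of the NAT Gateway variant, assuming placeholder subnet and route table IDs: allocate an Elastic IP, create the NAT gateway in a public subnet, then point the private subnet's route table at it.

    import boto3

    ec2 = boto3.client("ec2")

    # Elastic IP for the NAT gateway
    alloc = ec2.allocate_address(Domain="vpc")

    # The NAT gateway must live in a *public* subnet (routed to the internet gateway)
    nat = ec2.create_nat_gateway(
        SubnetId="subnet-public0123456789",      # placeholder public subnet
        AllocationId=alloc["AllocationId"],
    )
    nat_id = nat["NatGateway"]["NatGatewayId"]
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

    # Default route for the *private* subnet's route table via the NAT gateway
    ec2.create_route(
        RouteTableId="rtb-private0123456789",    # placeholder private route table
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat_id,
    )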
I have set up an ELB for a persistent public and subnet IP. As far as I can tell my subnet has unrestricted outgoing internet access (internet gateway attached and a route opening up all outgoing traffic to 0.0.0.0/0). I'm unsure whether the service setup will configure the EC2 instance to use this first and then attempt to set up the container. If not, then it probably doesn't apply.
ELB is for inbound traffic only, it does not provide any sort of outbound networking functionality for your EC2 or Fargate instance. The ELB is not in any way involved when ECS tries to pull a container image.
Having a volatile public IP address is a bit annoying, as my understanding is that the security policy will apply to both the ELB/Elastic-provided IP and this one.
What "security policy" are you referring to? I'm not aware of security policies on AWS that are applied directly to IP addresses. Assuming you mean the Security Group when you say "security policy", your understanding is incorrect. Both the EC2 or Fargate instance and the ELB should have different security groups assigned to them. The ELB would have a security group allowing all inbound traffic, if you want it to be public on the Internet. The EC2 or Fargate instance should have a security group only allowing inbound traffic from the ELB (by specifying the ELB's security group ID in the inbound rule).
I want to point out you say "EC2" in your question and never mention Fargate, but you tagged your question with Fargate twice and didn't tag it with EC2. EC2 and Fargate are separate compute services on AWS. You would either be using one or the other. It doesn't really matter in this case given the issue you are encountering, but it helps to be clear in your questions.

Unable to access Kubernetes service from one cluster to another (over VPC peering)

I'm wondering if anyone can help with my issue, here's the setup:
We have 2 separate kubernetes clusters in GKE, running on v1.17, and they each sit in a separate project
We have set up VPC peering between the two projects
On cluster 1, we have 'service1' which is exposed by an internal HTTPS load balancer, we don't want this to be public
On cluster 2, we intend on being able to access 'service1' via the internal load balancer, and it should do this over the VPC peering connection between the two projects
Here's the issue:
When I'm connected via SSH on a GKE node in cluster 2, I can successfully run a curl request to https://service1.domain.com running on cluster 1 and get the expected response, so traffic is definitely routing from cluster 2 > cluster 1. However, when I run the same curl command from a pod running on a GKE node, the request times out.
I have done as much troubleshooting as I can, including telnet, traceroute etc., and I'm really stuck on why this might be. If anyone can shed light on the difference here that would be great.
I did wonder whether pod networking is somehow forwarding traffic over the cluster's public IP rather than over the VPC peering connection.
So it seems you're not using a "VPC-native" cluster and what you need is "IP masquerading".
From this document:
"A GKE cluster uses IP masquerading so that destinations outside of the cluster only receive packets from node IP addresses instead of Pod IP addresses. This is useful in environments that expect to only receive packets from node IP addresses."
You can use ip-masq-agent or k8s-custom-iptables. After this it will work, since it will be as if you're making the call from the node, not from inside the pod.
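A rough sketch of that configuration with the Python Kubernetes client, assuming an ip-masq-agent DaemonSet is already running and reading the usual ip-masq-agent ConfigMap in kube-system; the CIDRs are examples for your cluster's own pod/service ranges. The idea is that only those ranges stay un-masqueraded, so traffic to the peered VPC leaves SNAT'd to the node IP:

    from kubernetes import client, config

    config.load_kube_config()

    # Only the cluster's own pod/service ranges stay un-masqueraded; anything
    # else (including the peered VPC) is SNAT'd to the node IP. Example CIDRs.
    masq_config = (
        "nonMasqueradeCIDRs:\n"
        "  - 10.8.0.0/14\n"      # example pod range
        "  - 10.12.0.0/20\n"     # example service range
        "masqLinkLocal: false\n"
        "resyncInterval: 60s\n"
    )

    cm = client.V1ConfigMap(
        metadata=client.V1ObjectMeta(name="ip-masq-agent", namespace="kube-system"),
        data={"config": masq_config},
    )
    client.CoreV1Api().create_namespaced_config_map(namespace="kube-system", body=cm)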
As mentioned in one of the answers, IP aliases (VPC-native) should work out of the box. If you are using a route-based GKE cluster rather than a VPC-native one, you would need to use custom routes.
As per this document:
By default, VPC Network Peering with GKE is supported when used with IP aliases. If you don't use IP aliases, you can export custom routes so that GKE containers are reachable from peered networks.
This is also explained in this document:
If you have GKE clusters without VPC native addressing, you might have multiple static routes to direct traffic to VM instances that are hosting your containers. You can export these static routes so that the containers are reachable from peered networks.
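For the route-based case, a hedged sketch of enabling exportCustomRoutes on the peering with the Python Compute API client (project, network and peering names are placeholders, and the importing side needs importCustomRoutes enabled as well); gcloud compute networks peerings update ... --export-custom-routes does the same thing:

    from googleapiclient import discovery

    compute = discovery.build("compute", "v1")

    # Export the cluster's static/custom routes over the peering so containers
    # in route-based clusters are reachable from the peered network.
    compute.networks().updatePeering(
        project="cluster1-project",          # placeholder
        network="cluster1-vpc",              # placeholder
        body={"networkPeering": {
            "name": "peer-to-cluster2",      # placeholder peering name
            "exportCustomRoutes": True,
        }},
    ).execute()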
The problem you're facing seems similar to the one mentioned in this SO question; perhaps your pods are using IPs outside of the VPC range and for that reason cannot access the peered VPC?
UPDATE: In Google Cloud, I tried to access the service from another cluster which had VPC-native networking enabled, which I believe allows pods to use the VPC routing and possibly the internal IPs.
Problem solved :-)

Possible to get static IP address for the Container deployed to Cloud Run?

I would like to deploy a container image to Google Cloud Run (fully managed). I follow the instructions:
https://cloud.google.com/run/docs/quickstarts/build-and-deploy
I was wondering whether I can get a static IP for the container or not. Please note that I am not using a VM instance. I am new to this service. I really appreciate it if you could help me with this issue.
You can get a static IP for your Cloud Run service (not individual containers, as many containers can be running the same app) by creating a "Cloud HTTP(S) Load Balancer" that serves on a static IP and putting your service behind it.
See the relevant section in the documentation on how to create a LB and add a "serverless network endpoint group" behind it that routes the traffic to Cloud Run.
There's also a sample step-by-step guide on this with a load balancer and a static IP at https://cloud.google.com/run/docs/multiple-regions.
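Roughly, the moving parts from those guides look like the following (all names are placeholders, a managed SSL certificate "run-cert" and a Cloud Run service "my-service" in us-central1 are assumed to exist already, and the flags should be double-checked against the current docs):

    import subprocess

    # Reserve a static IP, wrap the Cloud Run service in a serverless NEG,
    # and wire it up behind a global HTTPS load balancer.
    steps = [
        "gcloud compute addresses create run-lb-ip --global",
        "gcloud compute network-endpoint-groups create run-neg --region=us-central1"
        " --network-endpoint-type=serverless --cloud-run-service=my-service",
        "gcloud compute backend-services create run-backend"
        " --load-balancing-scheme=EXTERNAL --global",
        "gcloud compute backend-services add-backend run-backend --global"
        " --network-endpoint-group=run-neg --network-endpoint-group-region=us-central1",
        "gcloud compute url-maps create run-urlmap --default-service=run-backend",
        "gcloud compute target-https-proxies create run-proxy"
        " --url-map=run-urlmap --ssl-certificates=run-cert",
        "gcloud compute forwarding-rules create run-fr --global"
        " --address=run-lb-ip --target-https-proxy=run-proxy --ports=443",
    ]

    for cmd in steps:
        subprocess.run(cmd.split(), check=True)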
If you mean "how do I get static IPs for outbound connections my Cloud Run app make", that's a different question with a different answer (it'll be possible soon).
Cloud Run is a fully managed serverless containerised service, so you won't get access to an IP address. You will get a fixed URL for the service (the hash in the service name is unique to the project-service combination).
This feature is now available for Google Cloud Run services:
https://cloud.google.com/run/docs/configuring/static-outbound-ip

Deterministic connection to cloud-internal IP of K8S service or its underlying endpoint?

I have a Kubernetes cluster (1.3.2) in GKE and I'd like to connect VMs and services from my Google project, which shares the same network as the cluster.
Is there a way for a VM that's internal to the subnet but not internal to the cluster itself to connect to the service without hitting the external IP?
I know there's a ton of things you can do to unambiguously determine the IP and port of services, such as the ENVs and DNS...but the clusterIP is not reachable outside of the cluster (obviously).
Is there something I'm missing? An important aspect is that this is meant to be a service "public" to the project, such that I don't know which VMs on the project will want to connect to the service (this could rule out loadBalancerSourceRanges). I understand the endpoint which the service actually wraps is an internal IP I can hit, but the only good way to get to that IP is through the Kube API or kubectl, both of which are not prod-ideal ways of hitting my service.
Check out my more thorough answer here, but the most common solution to this is to create bastion routes in your GCP project.
In the simplest form, you can create a single GCE Route to direct all traffic with a destination IP in your cluster's service IP range to land on one of your GKE nodes. If that SPOF scares you, you can create several routes pointing to different nodes, and traffic will round-robin between them.
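A hedged sketch of a single such route with the Python Compute API client, where the project, network, node instance and service IP range are all placeholders:

    from googleapiclient import discovery

    compute = discovery.build("compute", "v1")

    # Send traffic destined for the cluster's service IP range to one GKE node;
    # kube-proxy on that node forwards it to the right backend pod.
    compute.routes().insert(
        project="my-project",
        body={
            "name": "gke-services-via-node-1",
            "network": "projects/my-project/global/networks/default",
            "destRange": "10.11.240.0/20",   # cluster service IP range (example)
            "priority": 1000,
            "nextHopInstance":
                "projects/my-project/zones/us-central1-a/instances/gke-node-1",
        },
    ).execute()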
If that management overhead isn't something you want to do going forward, you could write a simple controller in your GKE cluster to watch the Nodes API endpoint, and make sure that you have a live bastion route to at least N nodes at any given time.
GCP internal load balancing was just released as alpha, so in the future, kube-proxy on GCP could be implemented using that, which would eliminate the need for bastion routes to handle internal services.