AWS ALB's Port based routing to ECS tasks - amazon-ecs

I have two tasks in AWS ECS.
A. is the default target, mysite.com
B. is a forwarding rule, path based: "/api/*"
I also want to forward to task B's container when the request specifies a separate port, like
mysite.com:12345
Is this possible?
I tried to add a new listener on a port other than 443 and 80, but it shows a warning that the port is not reachable because the security group does not allow it (and I don't think I can change the security group).

If you want requests to reach your load balancer on that port, at least one security group on the load balancer must allow inbound traffic on that port. You can either edit an existing security group attached to the load balancer, or add a new security group to the load balancer.
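As a sketch (the resource names and the port number are assumptions, not your actual configuration), the extra listener and the matching security group rule might look like this in Terraform:

```hcl
# Hypothetical names; adjust to your own ALB, target group, and security group.
resource "aws_lb_listener" "extra_port" {
  load_balancer_arn = aws_lb.main.arn
  port              = 12345
  protocol          = "HTTP"

  # Requests arriving on :12345 go straight to task B's target group.
  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.task_b.arn
  }
}

# Without this rule the listener is unreachable, which is what the warning means.
resource "aws_security_group_rule" "allow_extra_port" {
  type              = "ingress"
  from_port         = 12345
  to_port           = 12345
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = aws_security_group.alb.id
}
```

Both pieces are needed: the listener accepts the port on the load balancer, and the security group rule allows the traffic to reach it.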

Related

AWS ECS 503 Service Temporarily Unavailable error

I'm following a tutorial How to Deploy a Dockerised Application on AWS ECS With Terraform and running into a 503 error trying to hit my App.
The App runs fine in a local Container (http://localhost:3000/contacts), but is unreachable via ECS deployment. When I check the AWS Console, I see health checks are failing, so there's no successful deployment of the App.
I've read through / watched a number of tutorials, and they all have the same configuration as in the tutorial mentioned above. I'm thinking something must have changed on the AWS side, but I can't figure it out.
I've also read a number of 503-related posts here, and tried various things such as opening different ports, and setting SG ingress wide open, but to no avail.
If anyone is interested in troubleshooting, and has a chance, here's a link to my code: https://github.com/CumulusCycles/tf-cloud-demo
Thanks for any insights anyone may have on this!
Cheers,
Rob
Your target group is configured to forward traffic to port 80 on the container. Your container is listening on port 3000. You need to modify your target group to forward traffic to the port your container is actually listening on:
```hcl
resource "aws_lb_target_group" "target_group" {
  name        = "target-group"
  port        = 3000 # must match the port the container listens on
  protocol    = "HTTP"
  target_type = "ip"
  vpc_id      = aws_default_vpc.default_vpc.id # referencing the default VPC
}
```
Your load balancer port is the only port external users will see. Your load balancer is listening on port 80 so people can hit it over HTTP without specifying a port. When the load balancer receives traffic on that port it forwards it to the target group. The target group receives traffic and then forwards it to an instance in the target group, on the configured port.
It does seem a bit redundant, but you need to specify the port(s) that your container listens on in the ECS task definition, and then configure that same port again in both the target group configuration, and the ECS service's load balancer configuration. You may even need to configure it again in the target group's health check configuration if the default health checks don't work for your application.
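To make that redundancy concrete, here is a hedged sketch (names, image, and sizing are placeholders; launch type and network configuration are omitted for brevity) of the same port appearing in the task definition and again in the service's load balancer block:

```hcl
resource "aws_ecs_task_definition" "app" {
  family                   = "app"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256
  memory                   = 512

  container_definitions = jsonencode([{
    name  = "app"
    image = "myrepo/app:latest" # placeholder image
    portMappings = [{
      containerPort = 3000 # the port the app listens on
      protocol      = "tcp"
    }]
  }])
}

resource "aws_ecs_service" "app" {
  name            = "app"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.app.arn

  load_balancer {
    target_group_arn = aws_lb_target_group.target_group.arn
    container_name   = "app"
    container_port   = 3000 # same port again, matching the target group
  }
}
```

If any one of these (task definition, service, target group) disagrees on the port, the health checks fail and you get the 503.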
Note: If you look at the comments on that blog post you linked, you'll see several people saying the same thing about the target group port mapping.

Is AWS NLB supported for ECS?

Question
Is NLB supported for ECS with dynamic port mapping?
Background
It looks like there have been attempts to use NLB with ECS, but with problems around health checks:
Network Load Balancer for inter-service communication
Health check interval for Network Load Balancer Target Group
NLB Target Group health checks are out of control
When we talked with AWS, they acknowledged that the NLB documentation of the health check interval is not accurate: an NLB has multiple instances, each sending health checks independently, so the interval at which an ECS task actually receives health checks does not follow HealthCheckIntervalSeconds.
Also, the ECS task page specifically mentions using an ALB for dynamic port mapping.
Hence, I suppose NLB is not supported for ECS? If there is documentation stating that NLB is supported for ECS, please point me to it.
Update
Why are properly functioning Amazon ECS tasks registered to ELB marked as unhealthy and replaced?
Elastic Load Balancing is repeatedly flagging properly functioning Amazon Elastic Container Service (Amazon ECS) tasks as unhealthy. These incorrectly flagged tasks are stopped and new tasks are started to replace them. How can I troubleshoot this?
Change the health check grace period to a time period appropriate for your service.
A Network Load Balancer makes routing decisions at the transport layer (TCP/SSL). It can handle millions of requests per second. After the load balancer receives a connection, it selects a target from the target group for the default rule using a flow hash routing algorithm. It attempts to open a TCP connection to the selected target on the port specified in the listener configuration. It forwards the request without modifying the headers. Network Load Balancers support dynamic host port mapping.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/load-balancer-types.html#nlb
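Putting those pieces together, here is a hedged Terraform sketch (resource names are assumptions) of an NLB target group paired with an ECS service that sets a health check grace period, per the troubleshooting advice above:

```hcl
resource "aws_lb_target_group" "nlb_tg" {
  name        = "nlb-tg"
  port        = 8080
  protocol    = "TCP" # NLB routes at the transport layer
  target_type = "ip"
  vpc_id      = aws_vpc.main.id
}

resource "aws_ecs_service" "api" {
  name            = "api"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.api.arn

  # Give tasks time to start before failed health checks
  # cause them to be stopped and replaced.
  health_check_grace_period_seconds = 120

  load_balancer {
    target_group_arn = aws_lb_target_group.nlb_tg.arn
    container_name   = "api"
    container_port   = 8080
  }
}
```

The grace period works around the aggressive effective health check frequency described above, since the documented interval is not what tasks actually experience.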

send request from kubernetes pods through load balancer ip

I have a k8s cluster on DigitalOcean using Traefik 1.7 as the Ingress controller.
Our domain points to the load balancer IP created by Traefik.
All incoming requests go through the load balancer IP and are routed by Traefik to the proper service.
Now I want to perform HTTP requests from my services to an external system which only accepts registered IPs.
Can I give them the load balancer's IP and make all outbound requests go through the load balancer IP? Or do I need to give them all the nodes' public IPs?
thanks
You can do either.
But the best solution would be to make all the traffic go through the load balancer, assuming it is some proxy server with tunnelling capabilities, and open communications through the load balancer IP on your external system. Imagine: right now you might have a dozen nodes running 100 microservices, and you open your external system's security group to allow traffic from those dozen nodes.
But in the next few months you might go from 12 to 100 nodes, and then you have the overhead of updating your external system's security group every time you add a node in DigitalOcean.
You can also try a different approach: add a standalone proxy server and route traffic through it from your pods, something like this: Kubernetes outbound calls to an external endpoint with IP whitelisting.
Just a note: these aren't the only options; there are several ways to achieve this. Another approach would be associating a NAT IP with all your nodes and keeping every node behind a private network. It all depends on how you want to set it up and the purpose of the system you are building.
Hope this helps.
Unfortunately, Ingress resources only handle inbound traffic; they can't be used for outbound requests.
So you need to provide all the nodes' public IPs.
Another idea: if you use a forward proxy (e.g. nginx, HAProxy), you can limit the nodes where the forward-proxy pods are scheduled by setting a nodeSelector.
By doing so, I think you can limit which nodes' public IP addresses you need to register.
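As an illustrative sketch (the label, names, and image are assumptions), pinning the forward-proxy pods to specific nodes could look like this with the Terraform Kubernetes provider:

```hcl
resource "kubernetes_deployment" "forward_proxy" {
  metadata {
    name = "forward-proxy"
  }

  spec {
    replicas = 2

    selector {
      match_labels = { app = "forward-proxy" }
    }

    template {
      metadata {
        labels = { app = "forward-proxy" }
      }

      spec {
        # Schedule proxy pods only on nodes labelled egress=true,
        # so only those nodes' public IPs need to be registered
        # with the external system.
        node_selector = { "egress" = "true" }

        container {
          name  = "proxy"
          image = "haproxy:2.8" # placeholder proxy image
        }
      }
    }
  }
}
```

The nodes themselves would need the matching `egress=true` label (e.g. via `kubectl label node`), and the other services would send their outbound requests through this proxy.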
Egress packets from a k8s cluster to cluster-external services have the node's IP as the source IP. So you can register the k8s nodes' IPs in the external system to allow egress packets from the k8s cluster.
https://kubernetes.io/docs/tutorials/services/source-ip/ says egress packets from k8s get source-NAT'ed with the node's IP:
Source NAT: replacing the source IP on a packet, usually with a node’s IP
The following can be used to control egress packets from a k8s cluster:
k8s network policies
calico CNI egress policies
istio egress gateway
nirmata/kube-static-egress-ip GitHub project
kube-static-egress-ip provides a solution with which a cluster operator can define an egress rule where a set of pods' outbound traffic to a specified destination is always SNAT'ed with a configured static egress IP. kube-static-egress-ip provides this functionality in a Kubernetes-native way using custom resources.
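For the first item in that list, here is a hedged sketch (labels, namespace, and CIDR are placeholders) of a Kubernetes NetworkPolicy that limits pod egress to the external system, expressed with the Terraform Kubernetes provider:

```hcl
resource "kubernetes_network_policy" "egress_to_external" {
  metadata {
    name      = "egress-to-external"
    namespace = "default"
  }

  spec {
    # Applies to the pods that call the external system.
    pod_selector {
      match_labels = { app = "backend" }
    }

    policy_types = ["Egress"]

    egress {
      to {
        ip_block {
          cidr = "203.0.113.10/32" # the external system's address (placeholder)
        }
      }
    }
  }
}
```

Note that a NetworkPolicy only restricts which destinations pods may reach; giving the traffic a stable source IP still requires the node's SNAT or a tool like kube-static-egress-ip from the list above.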

Change path while forwarding with AWS Elastic Load Balancer

I have a number of containers running in Amazon ECS (in a private subnet) and each serving a different app on port 8080.
I have a public-facing ELB (attached to apps.example.com) forwarding traffic based on the requested path. To illustrate, apps.example.com/app1 is forwarded to the target group for the app1 service on port 8080.
The problem I have is that the apps running in the containers are not expecting a path.
Right now, it seems like apps.example.com/app1 is forwarded to private_app1_container:8080/app1 but I need it to be forwarded to private_app1_container:8080.
Is there a way to achieve that?
I am creating the forwarding rules via the aws web interface and while I can forward to a specific target group, I do not see a way to specify the forwarding path. I have thought of redirecting instead of forwarding but my containers are in a private subnet and I would like them to stay isolated.
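For reference, the path-based forwarding rule described above looks roughly like this in Terraform (resource names are placeholders); note that the rule only matches the path, it does not rewrite it:

```hcl
resource "aws_lb_listener_rule" "app1" {
  listener_arn = aws_lb_listener.https.arn
  priority     = 10

  # Forwards the request as-is: the /app1 prefix is NOT stripped,
  # so the container still receives /app1/... in the path.
  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app1.arn
  }

  condition {
    path_pattern {
      values = ["/app1*"]
    }
  }
}
```

The forward action has no option to modify the path, which is exactly the limitation the question runs into.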

Restrict aws security groups on kubernetes cluster

I created my kubernetes cluster with specified security group for each ec2 server type, for example for backend server I have backend-sg associated with and a node-sg which is created with the cluster.
Now I'm trying to restrict access to my backend EC2 instance, opening only port 8090 as inbound and port 8080 as outbound to a specific security group (let's call it frontend-sg).
I managed to do so, but when I changed the inbound port to 8081 to check that the restrictions actually worked, I was still able to access port 8080 from the frontend-sg EC2 instance.
I think I am missing something...
Any help would be appreciated
I will try to lay out the situation in this answer to make it more clear. If I'm understanding your case correctly, both your frontend and backend instances are in node-sg, with backend-sg additionally attached to the backend instance and frontend-sg to the frontend one.
If so, then when you try those ports from the frontend EC2 instance to the backend EC2 instance, traffic flows because both instances are in the same security group (node-sg). If you want to check the group isolation, you should test from an instance that is outside node-sg and only in frontend-sg, targeting an instance in backend-sg (assuming that neither node-sg nor backend-sg permits said ports for inbound traffic)...
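As a hedged sketch (security group names are assumptions), the intended restriction, allowing only frontend-sg to reach the backend on port 8090, could be written as:

```hcl
resource "aws_security_group_rule" "backend_from_frontend" {
  type                     = "ingress"
  from_port                = 8090
  to_port                  = 8090
  protocol                 = "tcp"
  security_group_id        = aws_security_group.backend_sg.id  # rule lives on backend-sg
  source_security_group_id = aws_security_group.frontend_sg.id # only frontend-sg may connect
}
```

For this restriction to be observable in a test, node-sg must not also contain a rule allowing the same traffic between the two instances, since security group rules are additive across all attached groups.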
Finally, a small note... Kubernetes by default closes off all traffic (you need an Ingress, a LoadBalancer, an upstream proxy, a NodePort service, or some other means to actually expose your front-facing services), so the traditional fine-graining of backend/frontend instances with security groups is not that clear-cut when using k8s, especially since you don't really want to schedule manually (or by labels, for that matter) which instances your pods run on, and instead leave that to the k8s scheduler for better utilization of resources.