I have a number of containers running in Amazon ECS (in a private subnet) and each serving a different app on port 8080.
I have a public-facing ELB (attached to apps.example.com) that forwards traffic based on the requested path. To illustrate, apps.example.com/app1 is forwarded to the target group for the app1 service on port 8080.
The problem I have is that the apps running in the containers are not expecting a path.
Right now, it seems like apps.example.com/app1 is forwarded to private_app1_container:8080/app1, but I need it to be forwarded to private_app1_container:8080.
Is there a way to achieve that?
I am creating the forwarding rules via the AWS web console, and while I can forward to a specific target group, I do not see a way to specify the forwarding path. I have thought of redirecting instead of forwarding, but my containers are in a private subnet and I would like them to stay isolated.
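For reference, the kind of rule described above looks roughly like this in Terraform (a minimal sketch; the listener and target group names are placeholders, since the setup in the question is done through the console). The rule only matches and forwards; it does not rewrite the path:

resource "aws_lb_listener_rule" "app1" {
  listener_arn = aws_lb_listener.front_end.arn # placeholder: the ALB's public listener
  priority     = 10

  condition {
    path_pattern {
      values = ["/app1*"]
    }
  }

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app1.arn # placeholder: app1's target group on port 8080
  }
}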
I'm following the tutorial How to Deploy a Dockerised Application on AWS ECS With Terraform and running into a 503 error when trying to hit my app.
The app runs fine in a local container (http://localhost:3000/contacts), but is unreachable via the ECS deployment. When I check the AWS Console, I see that the health checks are failing, so the app is never successfully deployed.
I've read through / watched a number of tutorials, and they all have the same configuration as the tutorial mentioned above. I'm thinking something must have changed on the AWS side, but I can't figure out what.
I've also read a number of 503-related posts here and tried various things, such as opening different ports and setting the SG ingress wide open, but to no avail.
If anyone is interested in troubleshooting, and has a chance, here's a link to my code: https://github.com/CumulusCycles/tf-cloud-demo
Thanks for any insights anyone may have on this!
Cheers,
Rob
Your target group is configured to forward traffic to port 80 on the container. Your container is listening on port 3000. You need to modify your target group to forward traffic to the port your container is actually listening on:
resource "aws_lb_target_group" "target_group" {
name = "target-group"
port = 3000
protocol = "HTTP"
target_type = "ip"
vpc_id = "${aws_default_vpc.default_vpc.id}" # Referencing the default VPC
}
Your load balancer port is the only port external users see. Your load balancer is listening on port 80, so people can hit it over HTTP without specifying a port. When the load balancer receives traffic on that port, it forwards it to the target group, and the target group then forwards it to a registered target on the configured port.
It does seem a bit redundant, but you need to specify the port(s) your container listens on in the ECS task definition, and then configure that same port again in both the target group and the ECS service's load balancer configuration. You may even need to configure it once more in the target group's health check settings if the default health check doesn't work for your application.
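As a rough sketch (using the target group from above; the other resource names follow the tutorial's style and are assumptions), the same port shows up in the task definition's port mapping and in the service's load_balancer block:

resource "aws_ecs_task_definition" "app_task" {
  family                   = "app-task"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256
  memory                   = 512

  container_definitions = jsonencode([
    {
      name      = "app-task"
      image     = "your-image" # placeholder
      essential = true
      portMappings = [
        {
          containerPort = 3000 # the port the app actually listens on
          hostPort      = 3000
        }
      ]
    }
  ])
}

resource "aws_ecs_service" "app_service" {
  name            = "app-service"
  cluster         = aws_ecs_cluster.my_cluster.id # placeholder cluster reference
  task_definition = aws_ecs_task_definition.app_task.arn
  launch_type     = "FARGATE"
  desired_count   = 1

  network_configuration {
    subnets          = [aws_default_subnet.default_subnet_a.id] # placeholder subnet reference
    assign_public_ip = true
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.target_group.arn
    container_name   = "app-task" # must match the container name in the task definition
    container_port   = 3000       # must match containerPort and the target group port
  }
}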
Note: If you look at the comments on that blog post you linked, you'll see several people saying the same thing about the target group port mapping.
I have two tasks in AWS ECS:
A. the default target, mysite.com
B. a path-based forwarding rule, "/api/*"
I also want to forward to the task B container when the request specifies a separate port, like
mysite.com:12345
Is this possible?
I tried to add a new listener beyond 443 and 80, but it shows a warning that it's not reachable because the security group doesn't allow it (and I don't think I can change the security group).
If you want requests to reach your load balancer on that port, at least one security group on the load balancer must allow inbound traffic on that port. You can either edit an existing SG attached to the load balancer, or add a new SG to the load balancer.
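In Terraform terms (a sketch only; the resource names are placeholders, and you may be doing this in the console instead), the matching security group rule and the extra listener would look something like this:

resource "aws_security_group_rule" "lb_ingress_12345" {
  type              = "ingress"
  from_port         = 12345
  to_port           = 12345
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"] # or restrict this to your clients
  security_group_id = aws_security_group.lb_sg.id # the SG attached to the load balancer
}

resource "aws_lb_listener" "port_12345" {
  load_balancer_arn = aws_lb.main.arn
  port              = 12345
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.task_b.arn # task B's target group
  }
}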
Right now I'm setting up a Kubernetes cluster with Azure Kubernetes Service (AKS).
I'm using the "Bring your own subnet" feature and kubenet as the network mode.
As you can see in the diagram, on the left side is an example VM.
In the middle is a load balancer I set up in the cluster, which directs incoming traffic to all pods with the label "webserver"; this works fine.
On the right side is an example node of the cluster.
My problem is the outgoing traffic from the nodes. As you would expect, if you try to SSH into a VM in subnet 1 from a node in subnet 2, the connection uses the node's IP, the .198 (red line).
I would like to route the traffic over the load balancer, so that the incoming SSH connection at the VM in subnet 1 has a source address of .196 (green line).
Reason: we have a central firewall. To open ports, I have to specify the IP address the packets are coming from. In this case, I would like to route the traffic over one central load balancer so that only one IP has to be allowed through the firewall. Otherwise, every packet would have the source IP of its node.
Is this possible?
I have tried to look this use case up in the Azure docs, but most of the time they talk about the use of public IPs, which I am not using in this case.
I have a container with an exposed port in a pod. When I check the log in the containerized app, the source of the requests is always 192.168.189.0, which is a cluster IP. I need to be able to see the original source IP of the request. Is there any way to do this?
I tried setting externalTrafficPolicy: Local on the service instead of Cluster, but it still doesn't work. Please help.
When you are working on an application or service that needs to know the source IP address, you need to know the topology of the network in front of it. This means knowing how the different layers of load balancers or proxies deliver the traffic to your service.
Depending on which cloud provider or load balancer you have in front of your application, the source IP address should be in a header of the request. The header to look for is X-Forwarded-For (more info here); depending on the proxy or load balancer you are using, you sometimes need to enable this header to receive the correct IP address.
I have a Kubernetes cluster running several different applications... one of my PHP applications calls an external service that requires the caller's IP address to be whitelisted. Since this is a Kubernetes cluster and the IP address can change, I could whitelist the IP address my application is currently running on, but it may not stay that way. Is there a "best practice" for whitelisting an IP from a Kubernetes cluster?
To achieve this, you need to add the IP addresses of your Kubernetes nodes to the whitelist of your external service. When you call something external from a pod, the request goes out through the node's interface and carries the node's external IP. If your nodes have no external IPs and sit behind a router, you need to whitelist the router's IP address instead. You can also configure some kind of proxy, add the proxy's IP to the whitelist, and route every call to the external service through that proxy.
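If the cluster happens to run in AWS inside private subnets (an assumption; the question doesn't say where it runs), the usual way to get a single stable egress IP to whitelist is a NAT gateway with an Elastic IP, roughly:

resource "aws_eip" "nat" {
  domain = "vpc"
}

resource "aws_nat_gateway" "egress" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public.id # a public subnet in the cluster's VPC (placeholder)
}

resource "aws_route" "private_egress" {
  route_table_id         = aws_route_table.private.id # route table used by the node subnets (placeholder)
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.egress.id
}

With that in place, all outbound traffic from the nodes leaves through the Elastic IP, so that single address is the only one the external service needs to whitelist.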