ECS+NLB does not support dynamic port mapping, hence only 1 task per EC2 instance?

Please confirm if these are true, or please point to the official AWS documentation that describes how to use dynamic port mapping with NLB and run multiple tasks of the same service on an ECS EC2 instance. I am not using Fargate.
ECS+NLB does NOT support dynamic port mapping, hence
ECS+NLB can only allow one task (Docker container) per EC2 instance in an ECS service
This is because:
AWS ECS Developer Guide - Creating a Load Balancer only mentions that ALB can use dynamic ports, and does not mention NLB.
Application Load Balancers offer several features that make them attractive for use with Amazon ECS services:
* Application Load Balancers allow containers to use dynamic host port mapping (so that multiple tasks from the same service are allowed per container instance).
The ECS task creation page clearly states that dynamic ports are for ALB.
Network Load Balancer for inter-service communication quotes a response from AWS support:
"However, I would like to point out that there is currently an ongoing issue with the NLB functionality with ECS, mostly seen with dynamic port mapping where the container is not able to stabilize due to health check errors, I believe the error you're seeing is related to that issue. I can only recommend that you use the ALB for now, as the NLB is still quite new so it's not fully compatible with ECS yet."
Updates
Found a document stating that NLB supports dynamic ports. However, if I switch from ALB to NLB, the ECS service does not work. When I log into an EC2 instance, the ECS agent is running but no Docker container is.
If someone has managed to make ECS (EC2 launch type) + NLB work, please provide step-by-step instructions for how it was done.
Amazon ECS Developer Guide - Service Load Balancing - Load Balancer Types - NLB
Network Load Balancers support dynamic host port mapping. For example, if your task's container definition specifies port 80 for an NGINX container port, and port 0 for the host port, then the host port is dynamically chosen from the ephemeral port range of the container instance (such as 32768 to 61000 on the latest Amazon ECS-optimized AMI). When the task is launched, the NGINX container is registered with the Network Load Balancer as an instance ID and port combination, and traffic is distributed to the instance ID and port corresponding to that container. This dynamic mapping allows you to have multiple tasks from a single service on the same container instance.
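The setup the quoted documentation describes can be sketched with boto3. This is a minimal, hypothetical example (cluster name, service name, and target group ARN are placeholders), assuming the NLB target group uses target type "instance":

import boto3

ecs = boto3.client("ecs")

# Container port 80, host port 0: ECS assigns an ephemeral host port,
# so several copies of this task can share one EC2 instance.
ecs.register_task_definition(
    family="nginx-dynamic-port",
    networkMode="bridge",  # dynamic host ports require bridge mode on EC2
    containerDefinitions=[{
        "name": "nginx",
        "image": "nginx:latest",
        "memory": 128,
        "portMappings": [{"containerPort": 80, "hostPort": 0, "protocol": "tcp"}],
    }],
)

# The service registers each task's instance ID + ephemeral port with the
# NLB target group.
ecs.create_service(
    cluster="my-cluster",  # hypothetical cluster name
    serviceName="nginx-nlb",
    taskDefinition="nginx-dynamic-port",
    desiredCount=2,  # more than one task can land on the same instance
    loadBalancers=[{
        "targetGroupArn": "arn:aws:elasticloadbalancing:...",  # placeholder ARN
        "containerName": "nginx",
        "containerPort": 80,
    }],
)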

Related

How to use WebRTC with RTCPeerConnection on Kubernetes?

I would like to build a web application that processes video from users' webcams. It looks like WebRTC is ideal for this project. But, I'm having a hard time creating a peer connection between the user's machine and a pod in my Kubernetes cluster. How would you connect these two peers?
This question on Server Fault discusses the issue I'm running into: WEBRTC MCU/SFU inside kubernetes - Port Ranges. WebRTC wants a bunch of ports open so users can create peer connections with the server, but Kubernetes has ports closed by default. Here's a rephrasing of my question: how do I create RTCPeerConnections connecting multiple users to an application hosted in a Kubernetes cluster? How should network ports be set up?
The closest I've come to finding a solution is Orchestrating GPU-accelerated streaming apps using WebRTC; their code is available on GitHub. I don't fully understand their approach, but I believe it depends on Istio.
The document you link to, Orchestrating GPU-accelerated streaming apps using WebRTC, is helpful.
What they do to allow for RTCPeerConnection is:
Use two separate Node pools (groups of Nodes):
Default Node pool - for most components, using Ingress and load balancer
TURN Node pool - for STUN/TURN service
STUN/TURN service
The STUN/TURN service is network bound and deployed to dedicated nodes. It is deployed with one instance on each node in the node pool. This can be done on Kubernetes using a DaemonSet. In addition, this service should use host networking, i.e. all nodes have their ports accessible from the Internet. Activate host networking for the PodTemplate in your DaemonSet:
hostNetwork: true
They use coturn as the STUN/TURN server.
The STUN/TURN service is run as a DaemonSet on each node of the TURN node pool. The coTURN process needs to allocate a fixed block of ports bound to the host IP address in order to properly serve relay traffic. A single coTURN instance can serve thousands of concurrent STUN and TURN requests based on the machine configuration.
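A minimal sketch of such a DaemonSet using the kubernetes Python client (the image, labels, and TURN node pool selector are illustrative assumptions, not taken from the linked project):

from kubernetes import client, config

config.load_kube_config()

coturn = client.V1Container(
    name="coturn",
    image="coturn/coturn",  # hypothetical image
    ports=[client.V1ContainerPort(container_port=3478, protocol="UDP")],
)

daemonset = client.V1DaemonSet(
    api_version="apps/v1",
    kind="DaemonSet",
    metadata=client.V1ObjectMeta(name="coturn"),
    spec=client.V1DaemonSetSpec(
        selector=client.V1LabelSelector(match_labels={"app": "coturn"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "coturn"}),
            spec=client.V1PodSpec(
                host_network=True,               # hostNetwork: true, as above
                node_selector={"pool": "turn"},  # pin to the TURN node pool
                containers=[coturn],
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_daemon_set(namespace="default", body=daemonset)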
Network
This part of their network diagram shows that some services are served over HTTPS behind an ingress gateway, whereas the STUN/TURN service uses a separate connection over DTLS/RTP to the nodes exposed via the host network.

Is AWS NLB supported for ECS?

Question
Is NLB supported for ECS with dynamic port mapping?
Background
It looks like there are attempts to use NLB with ECS, but there are problems with health checks.
Network Load Balancer for inter-service communication
Health check interval for Network Load Balancer Target Group
NLB Target Group health checks are out of control
When I talked with AWS, they acknowledged that the NLB documentation on the health check interval is not accurate: an NLB has multiple instances, each sending health checks independently, so the interval at which an ECS task receives health checks does not follow HealthCheckIntervalSeconds.
Also, the ECS task page specifically mentions ALB for dynamic port mapping.
Hence, I suppose NLB is not supported for ECS? If there is documentation stating that NLB is supported for ECS, please point to it.
Update
Why are properly functioning Amazon ECS tasks registered to ELB marked as unhealthy and replaced?
Elastic Load Balancing is repeatedly flagging properly functioning Amazon Elastic Container Service (Amazon ECS) tasks as unhealthy. These incorrectly flagged tasks are stopped and new tasks are started to replace them. How can I troubleshoot this?
change the Health check grace period to an appropriate time period for your service
A Network Load Balancer makes routing decisions at the transport layer (TCP/SSL). It can handle millions of requests per second. After the load balancer receives a connection, it selects a target from the target group for the default rule using a flow hash routing algorithm. It attempts to open a TCP connection to the selected target on the port specified in the listener configuration. It forwards the request without modifying the headers. Network Load Balancers support dynamic host port mapping.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/load-balancer-types.html#nlb
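The grace period fix from that article can be applied with boto3; a minimal sketch, assuming an existing service behind the NLB (cluster and service names are hypothetical):

import boto3

ecs = boto3.client("ecs")

# Ignore ELB health checks for 2 minutes after a task launches, so slow
# starters are not flagged unhealthy and replaced prematurely.
ecs.update_service(
    cluster="my-cluster",
    service="my-service",
    healthCheckGracePeriodSeconds=120,
)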

How Amazon ECS Service Discovery discovers dynamic ports

Amazon ECS Service Discovery makes it possible for an ECS service to automatically register itself with a predictable and friendly DNS name in Amazon Route 53, for example backend.corp.
However, assuming the use case of a web-based app, the host alone is not enough to communicate with a service; the port number is also required, especially when using dynamic port allocation on the host (a fixed container port is mapped to a random host port).
How do you manage dynamic port allocation with ECS Service Discovery? Sure, it is possible to use well-known ports, but that limits the number of hosts a Docker image can run on.
ECS Service Discovery will register an SRV record for each task that is a combination of the Container Name and the Port (See Service Discovery Considerations). You can query these values to find the list of containers to which you can connect.
Update:
How you query DNS will be very dependent on your specific project, and the language and framework involved. In Java, for example, you'd use JNDI; in Python, you could use the dnspython library; and in Node, you'd probably use the built-in dns module.
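For example, a short sketch with the dnspython library mentioned above (assuming dnspython 2.x; the name backend.corp is the service discovery name from the question, and SRV-based registration is assumed):

import dns.resolver

# Each SRV answer carries the host and the dynamically assigned host
# port of one running task.
for rr in dns.resolver.resolve("backend.corp", "SRV"):
    host = rr.target.to_text().rstrip(".")
    print(f"task reachable at {host}:{rr.port}")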

Can reverse proxy in Service Fabric be used with multiple windows containers?

I'm evaluating Service Fabric and Docker Swarm for container orchestration, and I can see Service Fabric has an edge with its reverse proxy implementation, which runs on all nodes in the cluster. The problem is that, based on the cluster manifest, only one port can be used as the reverse proxy port, so I don't fully understand how this can be utilized if you have multiple Windows containers, each running on its own port. I need to use port:port mapping only (with no HTTP rewrite), so ultimately I want one-to-one reverse port mapping to each individual Windows container.
Is it possible to accomplish by using service fabric?
To be clear I have www.app1.com and www.app2.com hosted in 2 different containers, they don't need to talk to each other. I deploy those to service fabric, how do I use reverse proxy with single published external port to reach those containers externally?
At this point in time (version 5.6 of Service Fabric), Reverse Proxy will do the service resolution using the Service Fabric naming service and provide the URI to get to your service. The URL that reverse proxy will find your service on is specific to Service Fabric - e.g. http://clusterFQDN/appName/serviceName:port.
You can use the DNS Service to get a container IP (the IP of a host node in the cluster running your container). However, you can only find the port by doing a DNS SRV record lookup.
Current best options for exposing containers in a Service Fabric cluster are:
If you have a fixed host port for your container, the Azure load balancer will be able to monitor where the container lives and forward requests to only those nodes. You can add additional public IPs to your load balancer and use one per container. This cannot be used with dynamic host ports in the cluster.
Azure API Management can resolve Service Fabric services by integrating with the Service Fabric Naming Service.
Create your own HTTP Gateway as a Reliable Service: https://github.com/weidazhao/Hosting or https://github.com/c3-ls/ServiceFabric-Http
Running Nginx as a service in the cluster: Based on this prototype you can run and configure Nginx in Service Fabric: https://github.com/knom/ServiceFabric-Nginx
Yes, you can use the reverse proxy with multiple containers. The idea is simple:
* Configure container-to-host port mapping so your host knows which port your application is listening on.
* Configure the container's endpoint so your container registers an endpoint with Service Fabric. You can choose the port for this endpoint; it will be registered with the Naming Service and become available to the reverse proxy.
Communication between containers can be done through the reverse proxy using the service name and the port you specified. If you didn't specify the port number, Service Fabric will assign one for you, and you can get it from an environment variable.
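As a hedged illustration, calling a sibling service through the local reverse proxy from Python (application and service names are hypothetical; 19081 is the commonly configured reverse proxy port):

import requests

# The reverse proxy resolves MyApp/MyService via the Naming Service and
# forwards the request to whichever node hosts the container.
resp = requests.get("http://localhost:19081/MyApp/MyService/api/values")
print(resp.status_code, resp.text)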
The Service Fabric team has excellent documentation about this here:
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-deploy-container-linux

Hitting an endpoint of HeadlessService - Kubernetes

We wanted pod names to be resolved to IPs to configure the seed nodes in an Akka cluster. This was happening by using a headless service and stateful sets in Kubernetes. But how do I expose a headless service externally to hit an endpoint from outside?
It is hard to expose a headless Kubernetes service to the outside, since this would require some complex TCP proxies. The reason is that a headless service is only a DNS record with an IP for each pod, and these IPs are only reachable from within the cluster.
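For illustration, resolving a headless service from inside the cluster returns one A record per pod (service and namespace names are hypothetical; assumes dnspython 2.x):

import dns.resolver

# One A record per pod backing the headless service; these IPs are only
# routable from within the cluster.
for rr in dns.resolver.resolve("akka-seed.default.svc.cluster.local", "A"):
    print(rr.address)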
One solution is to expose this via node ports, which means the ports are opened on the host itself. Unfortunately, this makes service discovery harder, because you don't know which host has a pod scheduled on it.
You can set up node ports via:
the services: https://kubernetes.io/docs/user-guide/services/#type-nodeport
or directly in the Pod by defining spec.containers[].ports[].hostPort
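For example, a minimal NodePort service written with the kubernetes Python client (labels and port numbers are illustrative assumptions):

from kubernetes import client, config

config.load_kube_config()

svc = client.V1Service(
    api_version="v1",
    kind="Service",
    metadata=client.V1ObjectMeta(name="akka-external"),
    spec=client.V1ServiceSpec(
        type="NodePort",
        selector={"app": "akka"},  # must match the pod labels
        ports=[client.V1ServicePort(
            port=2552,         # service port inside the cluster
            target_port=2552,  # container port
            node_port=30552,   # opened on every node's host IP
        )],
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="default", body=svc)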
Another alternative is to use a LoadBalancer, if your cloud provider supports one. Unfortunately, you cannot address each instance individually, since they share the same IP. This might not be suitable for your application.