Running Celery Flower behind Ambassador/Envoy reverse proxy - celery

I am trying to route to a Flower server from an Emissary/Ambassador proxy using a root path other than /. Under the hood this is just an Envoy front proxy.
There is an example of running Celery Flower behind nginx, but I have been unable to replicate the same thing with Ambassador.
My ambassador Mapping object is
apiVersion: getambassador.io/v2
generation: 16
host: strand.dev.REDACTED.info
kind: Mapping
metadata_labels:
  ambassador_crd: strand-flower.front-proxy-internal
name: strand-flower
namespace: front-proxy-internal
prefix: /flower/
regex_rewrite:
  pattern: ^/flower/(.*)$
  substitution: /\1
service: http://strand-flower.default.svc.cluster.local:80
Which ends in an envoy configuration of
{
  "match": {
    "case_sensitive": true,
    "headers": [
      {
        "exact_match": "strand.dev.REDACTED.info",
        "name": ":authority"
      }
    ],
    "prefix": "/flower/",
    "runtime_fraction": {
      "default_value": {
        "denominator": "HUNDRED",
        "numerator": 100
      },
      "runtime_key": "routing.traffic_shift.cluster_http___strand_flower_default_svc-0"
    }
  },
  "route": {
    "cluster": "cluster_http___strand_flower_default_svc-0",
    "priority": null,
    "regex_rewrite": {
      "pattern": {
        "google_re2": {
          "max_program_size": 200
        },
        "regex": "^/flower/(.*)$"
      },
      "substitution": "/\\1" <<< Is this supposed to be escaped?
    },
    "timeout": "3.000s"
  }
}
This results in the static assets not being served.
It also looks like the path is not actually being rewritten, because I have to go to
$HOSTNAME/flower/flower/ to reach the root, or to $HOSTNAME/flower/flower/task to reach the task part of the dashboard.
The flower server is started with --url_prefix=flower per the documentation here.
How do I get flower to work behind Ambassador?
nginx example
server {
    location /flower/static {
        alias /the/path/to/flower/static;
    }
    location /flower {
        rewrite ^/flower/(.*)$ /$1 break;
        proxy_pass http://localhost:5555;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
    }
}

Answering my own question in case anyone comes across this.
The problem is that you don't want the rewrite rule listed in the documentation.
You want an ambassador mapping that looks like this
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: strand-flower
spec:
  prefix: /flower/
  rewrite: /flower/
  service: http://strand-flower.default.svc.cluster.local:80
  host: "strand.dev.REDACTED.info"
The docs recommend that you rewrite /flower/thing/ -> /thing/, but instead you just want to pass the path through unchanged and make sure to set --url_prefix=flower when starting your Flower server.
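To see why the documented rewrite breaks things: with --url_prefix=flower, Flower itself only serves requests under /flower/, so a proxy that strips the prefix hands it paths it doesn't recognize. A small Python sketch of the two strategies (flower_routes is a hypothetical stand-in for Flower's prefix check, not Flower code):

```python
import re

def envoy_regex_rewrite(path):
    # The rewrite from the docs: strip the /flower/ prefix before proxying.
    return re.sub(r'^/flower/(.*)$', r'/\1', path)

def flower_routes(path, url_prefix="flower"):
    # Toy model: Flower started with --url_prefix=flower only answers
    # requests whose path still carries the /flower prefix.
    return path == f"/{url_prefix}" or path.startswith(f"/{url_prefix}/")

# With the documented rewrite, the proxied path misses Flower's prefix:
stripped = envoy_regex_rewrite("/flower/tasks")
print(stripped, flower_routes(stripped))       # "/tasks" does not match

# Passing the path through unchanged (rewrite: /flower/) matches:
print(flower_routes("/flower/tasks"))
```

This is why the working Mapping above uses rewrite: /flower/ rather than stripping the prefix.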

Related

Dynamic header based routing with fallback

I would like to route traffic to pods based on headers - with a fallback.
The desired result would be a k8s cluster where multiple versions of the same service could be deployed and routed to using header values.
svcA
svcB
svcC
each of these services (the main branch of the git repo) would be deployed either to the default namespace or labelled 'main'. Any feature branch of each service can also be deployed, either into its own namespace or labelled with the branch name.
Ideally, by setting a header X-svcA to a value matching a branch name, we would route any traffic to the matching namespace or label. If there is no such namespace or label, route the traffic to the default (main) pod.
if HEADERX && svcX:label
route->svcX:label
else
route->svcX
The first question: is this (or something like it) even possible with Istio or Linkerd?
You can do that using an Istio VirtualService:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
...
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v1
Read more here.
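The match-then-fallback behaviour relies on the http routes being evaluated in order: the first route whose header match succeeds wins, and the final, match-less route is the default. A toy Python model of that selection logic (illustrative only, not the Istio API):

```python
def pick_route(routes, headers):
    """Return the destination of the first route whose header match
    succeeds; a route without a match acts as the fallback."""
    for route in routes:
        match = route.get("match")
        if match is None:
            return route["destination"]          # fallback route
        if all(headers.get(h) == v for h, v in match.items()):
            return route["destination"]
    return None

routes = [
    {"match": {"x-svca": "feature-1"}, "destination": "svcA-feature-1"},
    {"destination": "svcA-main"},                # no match -> default
]

print(pick_route(routes, {"x-svca": "feature-1"}))
print(pick_route(routes, {}))
```

Ordering matters: if the fallback route were listed first, it would shadow every header match.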
Yes, you can route the request based on a header with Istio and Linkerd.
For Istio there is a nice article: https://dwdraju.medium.com/simplified-header-based-routing-with-istio-for-http-grpc-traffic-ff9be55f83ca
In Istio's VirtualService you can match on the header like:
http:
- match:
  - headers:
      x-svc-env:
        regex: v2
For Linkerd:
Kind = "service-router"
Name = "service"
Routes = [
  {
    Match {
      HTTP {
        PathPrefix = "/api/service/com.example.com.PingService"
      }
    }
    Destination {
      Service = "pinging"
    }
  },
  {
    Match {
      HTTP {
        PathPrefix = "/api/service/com.example.com.PingService"
        Header = [
          {
            Name = "x-version"
            Exact = "2"
          },
        ]
      }
    }
    Destination {
      Service = "pinging"
      ServiceSubset = "v2"
    }
  },
]

Spring OAuth2 Keycloak Kubernetes internal/external access

I have a Keycloak (10.0.3) server configured inside a Kubernetes cluster.
The Keycloak server has to handle authentication for external users (using an external URL) and also handle OAuth2 tokens for Spring microservice communication.
Then the web application Spring services use OIDC providers:
spring:
  security:
    oauth2:
      client:
        provider:
          oidc:
            issuer-uri: http://keycloak-cluster-http.keycloak-cluster.svc.cluster.local/auth/realms/myrealm
            authorization-uri: http://keycloak-cluster-http.keycloak-cluster.svc.cluster.local/auth/realms/myrealm/protocol/openid-connect/auth
            jwk-set-uri: http://keycloak-cluster-http.keycloak-cluster.svc.cluster.local/auth/realms/myrealm/protocol/openid-connect/certs
            token-uri: http://keycloak-cluster-http.keycloak-cluster.svc.cluster.local/auth/realms/myrealm/protocol/openid-connect/token
            user-name-attribute: preferred_username
The external URL of keycloak is https://keycloak.localhost, managed by ingress redirection handled by Traefik v2
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: keycloak-https
  namespace: keycloak-cluster
  annotations:
    traefik.frontend.passHostHeader: "true"
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`keycloak.localhost`)
      kind: Rule
      services:
        - name: keycloak-cluster-http
          port: 80
  tls:
    options:
      name: mytlsoption
      namespace: traefik
    store:
      name: default
I can access Keycloak using https://keycloak.localhost, no problem, it works.
The problem is that when I try to access my web application, it always redirects to 'http://keycloak-cluster-http.keycloak-cluster.svc.cluster.local/auth/realms/myrealm', which is not resolvable outside k8s.
If I change issuer-uri to http://keycloak.localhost, then it doesn't work either, as keycloak.localhost is not resolvable inside k8s.
I tried setting KEYCLOAK_FRONTEND_URL to https://keycloak.localhost/auth, but nothing changed.
Does anyone have the same kind of setup and managed to make it work?
Best regards
Managed to fix it using coredns and adding a rewrite rule... :
rewrite name keycloak.localhost keycloak-cluster-http.keycloak-cluster.svc.cluster.local
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        rewrite name keycloak.localhost keycloak-cluster-http.keycloak-cluster.svc.cluster.local
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
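Conceptually, the rewrite plugin just substitutes the query name before cluster DNS answers it, so pods asking for the external hostname get the internal service's address. A toy Python model of that lookup (not CoreDNS code; the ClusterIP is made up):

```python
# Name rewrites applied before the lookup, mirroring the Corefile rule.
REWRITES = {
    "keycloak.localhost": "keycloak-cluster-http.keycloak-cluster.svc.cluster.local",
}

# What the kubernetes plugin would answer (10.43.0.17 is a made-up ClusterIP).
CLUSTER_DNS = {
    "keycloak-cluster-http.keycloak-cluster.svc.cluster.local": "10.43.0.17",
}

def resolve(name):
    # Apply the rewrite first, then look the name up in cluster DNS.
    name = REWRITES.get(name, name)
    return CLUSTER_DNS.get(name)

print(resolve("keycloak.localhost"))  # same answer as the internal name
```

The effect is that issuer-uri can use the browser-visible hostname everywhere, since pods now resolve it too.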
The authorization_uri needs to be understood by the browser since that URI is processed in the front channel. The rest of the URIs are processed in the back channel.
Because of that, the authorization_uri should use the front channel way of addressing the authorization server:
authorization_uri: https://keycloak.localhost/auth/realms/myrealm/protocol/openid-connect/auth
EDIT Based on Joe Grandja's input below, it appears that it's also necessary to not specify the issuer-uri property. The issuer-uri property is a shortcut for specifying the other URIs, and since you are specifying those, you don't need it anyway.
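One way to keep the split straight: only the front-channel endpoint must use the browser-reachable host, while back-channel endpoints can stay on the cluster-internal service name. A small sketch building the endpoint set (hosts and realm taken from the question; the helper itself is illustrative):

```python
INTERNAL = "http://keycloak-cluster-http.keycloak-cluster.svc.cluster.local"
EXTERNAL = "https://keycloak.localhost"
REALM = "myrealm"

def oidc_endpoints(internal, external, realm):
    base = f"/auth/realms/{realm}/protocol/openid-connect"
    return {
        # Front channel: the browser follows this, so it must resolve
        # outside the cluster.
        "authorization-uri": f"{external}{base}/auth",
        # Back channel: server-to-server, cluster DNS is fine.
        "token-uri": f"{internal}{base}/token",
        "jwk-set-uri": f"{internal}{base}/certs",
    }

for name, uri in oidc_endpoints(INTERNAL, EXTERNAL, REALM).items():
    print(name, uri)
```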
Here is a POC that helped me with the issue.
Similar configuration: Keycloak and the Spring gateway are both in Kubernetes.
The external user uses the Keycloak external host over HTTPS:
https://external-https/auth/realms/myrealm/protocol/openid-connect/auth?...
The ingress terminates HTTPS and forwards HTTP, changing the host to internal-http.
The gateway uses internal-http to connect to Keycloak on port 8080.
In order for the issuer to use the same protocol as the external URL, the configuration uses https in user-info-uri and authorization-uri, while the rest stay http.
Make sure that the Keycloak pod accepts HTTPS connections (8443).
authorization-uri: https://internal-http:8443/auth/realms/myrealm/protocol/openid-connect/auth
user-info-uri: https://internal-http:8443/auth/realms/myrealm/protocol/openid-connect/userinfo
issuer-uri: http://internal-http:8080/auth/realms/myrealm
To fix the host part of the issuer, in the gateway code I updated the following, based on https://github.com/spring-projects/spring-security/issues/8882#user-content-oauth2-client
@SneakyThrows
private WebClient webClient() {
    SslContext sslContext = SslContextBuilder
            .forClient()
            .trustManager(InsecureTrustManagerFactory.INSTANCE)
            .build();
    HttpClient httpClient = HttpClient.create()
            .secure(t -> t.sslContext(sslContext))
            .wiretap(true);
    ReactorClientHttpConnector conn = new ReactorClientHttpConnector(httpClient);
    return WebClient.builder()
            .defaultHeader("HOST", "external-https")
            .clientConnector(conn)
            .build();
}

@Bean
WebClientReactiveAuthorizationCodeTokenResponseClient webClientReactiveAuthorizationCodeTokenResponseClient() {
    final WebClientReactiveAuthorizationCodeTokenResponseClient client =
            new WebClientReactiveAuthorizationCodeTokenResponseClient();
    client.setWebClient(webClient());
    return client;
}

@Bean
WebClientReactiveClientCredentialsTokenResponseClient webClientReactiveClientCredentialsTokenResponseClient() {
    final WebClientReactiveClientCredentialsTokenResponseClient client =
            new WebClientReactiveClientCredentialsTokenResponseClient();
    client.setWebClient(webClient());
    return client;
}

@Bean
WebClientReactiveRefreshTokenTokenResponseClient webClientReactiveRefreshTokenTokenResponseClient() {
    final WebClientReactiveRefreshTokenTokenResponseClient client =
            new WebClientReactiveRefreshTokenTokenResponseClient();
    client.setWebClient(webClient());
    return client;
}

@Bean
WebClientReactivePasswordTokenResponseClient webClientReactivePasswordTokenResponseClient() {
    final var client = new WebClientReactivePasswordTokenResponseClient();
    client.setWebClient(webClient());
    return client;
}

@Bean
DefaultReactiveOAuth2UserService reactiveOAuth2UserService() {
    final DefaultReactiveOAuth2UserService userService = new DefaultReactiveOAuth2UserService();
    userService.setWebClient(webClient());
    return userService;
}
Certificate validation is disabled because the connection is only between Keycloak and the gateway, both inside Kubernetes; were it not for this issue, a plain HTTP connection would have been used.
The HOST header tells Keycloak which host to use for the issuer.
Another issue encountered is that the Location header returned when redirecting to authentication contains the internal URL, not the external one that the outside world knows about.
To handle that, update the location returned from the gateway:
@Bean
public SecurityWebFilterChain springSecurityFilterChain(ServerHttpSecurity http)
    ...
    oauth2Login(oAuth2LoginSpec -> oAuth2LoginSpec
    ...
    .addFilterAfter(new LoginLocationFilter("external-https"), SecurityWebFiltersOrder.LAST)
    ...

public class LoginLocationFilter implements WebFilter {

    private final String externalUrl;

    public LoginLocationFilter(String externalUrl) {
        this.externalUrl = externalUrl;
    }

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) {
        // before commit, otherwise the headers will be read-only
        exchange.getResponse().beforeCommit(() -> {
            fixLocation(exchange);
            return Mono.empty();
        });
        return chain.filter(exchange);
    }
    ...

Why I can't change the administrative state of a port on cisco apic via rest API

When I try to change the administrative state of a port on the Cisco APIC via the REST API (aci_rest), I get the following error:
"msg": "APIC Error 170: Invalid access, MO: l1PhysIf",
"status": -1
Does anyone have any idea about that?
Thanks in advance.
- name: Change admin state of the port
  aci_rest:
    hostname: "{{ inventory_hostname }}"
    username: "{{ aci_user }}"
    password: "{{ aci_password }}"
    validate_certs: no
    path: "/api/node/mo/topology/pod-{{ pod_id }}/node-{{ node_id }}/sys/phys-[eth{{ interface }}].json"
    method: post
    content:
      {
        "l1PhysIf": {
          "attributes": {
            "adminSt": "down"
          }
        }
      }
I've solved the problem. Cisco has restricted the "l1PhysIf" object, and its documentation looks like this:
Class l1: PhysIf (CONCRETE)
Class ID:3627
Class Label: Layer 1 Physical Interface Configuration
Encrypted: false - Exportable: false - Persistent: true - Configurable: false - Subject to Quota: Disabled - Abstraction Layer: Concrete Model - APIC NX Processing: Disabled
Write Access: [NON CONFIGURABLE]
I've used "fabricRsOosPath" instead and it has worked.
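For reference, fabricRsOosPath targets the fabric's out-of-service MO rather than the interface itself. A hedged Python sketch of the payload I believe this takes (the uni/fabric/outofsvc target and the lc: blacklist attribute are assumptions drawn from common ACI examples; verify against your APIC version):

```python
import json

def oos_payload(pod, node, interface):
    # Hypothetical helper: blacklists (admin-downs) a leaf port via
    # fabricRsOosPath instead of the non-configurable l1PhysIf.
    return {
        "fabricRsOosPath": {
            "attributes": {
                "tDn": f"topology/pod-{pod}/paths-{node}/pathep-[eth{interface}]",
                "lc": "blacklist",  # assumed lifecycle value; check your APIC docs
            }
        }
    }

# Assumed target: POSTed to /api/node/mo/uni/fabric/outofsvc.json on the APIC.
print(json.dumps(oos_payload(1, 101, "1/10"), indent=2))
```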
Same thing here, but I'm just trying to add a description. I get locking down a physical port object, kinda. But a description?
The class mentioned above doesn't have any attributes that apply to a port description. Has anyone had success at this?
I'm currently just using Postman to test, but will eventually move this into Python.
API Post (port 10 for example):
https://{{url}}/api/node/mo/topology/pod-1/node-101/sys/phys-[eth1/10].json
JSON body:
{
  "l1PhysIf": {
    "attributes": {
      "descr": "CHANGED"
    }
  }
}
Response: 400 Bad Request
{
  "error": {
    "attributes": {
      "code": "170",
      "text": "Invalid access, MO: l1PhysIf"
    }
  }
}

Azure Service Fabric IPv6 networking issues

We are having issues deploying our Service Fabric cluster to Azure and have it handle both IPv4
and IPv6 traffic.
We are developing an application that have mobile clients on iOS and Android which communicate with
our Service Fabric cluster. The communication consists of both HTTP traffic and TCP socket communication.
We need to support IPv6 in order to have Apple accept the app in their App Store.
We are using an ARM template for deploying to Azure, as it seems the portal does not support configuring a
load balancer with an IPv6 configuration for Virtual Machine Scale Sets (ref: url). The linked page also states other limitations
of the IPv6 support, such as that private IPv6 addresses cannot be deployed to VM Scale Sets. However, according
to this page, the possibility to assign private IPv6 addresses to VM Scale Sets is available in preview
(although this was last updated 07/14/2017).
For this question I have tried to keep this as general as possible, and based the ARM template on a template found
in this tutorial. The template is called "template_original.json" and can be downloaded from
here. This is a basic template for a service fabric cluster with no security for simplicity.
I will be linking the entire modified ARM template in the bottom of this post, but will highlight the
main modified parts first.
Public IPv4 and IPv6 addresses that are associated with the load balancer. These are associated with their respective backend pools:
"frontendIPConfigurations": [
  {
    "name": "LoadBalancerIPv4Config",
    "properties": {
      "publicIPAddress": {
        "id": "[resourceId('Microsoft.Network/publicIPAddresses',concat(parameters('lbIPv4Name'),'-','0'))]"
      }
    }
  },
  {
    "name": "LoadBalancerIPv6Config",
    "properties": {
      "publicIPAddress": {
        "id": "[resourceId('Microsoft.Network/publicIPAddresses',concat(parameters('lbIPv6Name'),'-','0'))]"
      }
    }
  }
],
"backendAddressPools": [
  {
    "name": "LoadBalancerIPv4BEAddressPool",
    "properties": {}
  },
  {
    "name": "LoadBalancerIPv6BEAddressPool",
    "properties": {}
  }
],
Load balancing rules for frontend ports on respective public IP addresses, both IPv4 and IPv6.
This amounts to four rules in total, two per front end port. I have added port 80 for HTTP here and port 5607 for Socket connection.
Note that I have updated the backend port for IPv6 port 80 to be 8081 and IPv6 port 8507 to be 8517.
{
  "name": "AppPortLBRule1Ipv4",
  "properties": {
    "backendAddressPool": {
      "id": "[variables('lbIPv4PoolID0')]"
    },
    "backendPort": "[parameters('loadBalancedAppPort1')]",
    "enableFloatingIP": "false",
    "frontendIPConfiguration": {
      "id": "[variables('lbIPv4Config0')]"
    },
    "frontendPort": "[parameters('loadBalancedAppPort1')]",
    "idleTimeoutInMinutes": "5",
    "probe": {
      "id": "[concat(variables('lbID0'),'/probes/AppPortProbe1')]"
    },
    "protocol": "tcp"
  }
},
{
  "name": "AppPortLBRule1Ipv6",
  "properties": {
    "backendAddressPool": {
      "id": "[variables('lbIPv6PoolID0')]"
    },
    /*"backendPort": "[parameters('loadBalancedAppPort1')]",*/
    "backendPort": 8081,
    "enableFloatingIP": "false",
    "frontendIPConfiguration": {
      "id": "[variables('lbIPv6Config0')]"
    },
    "frontendPort": "[parameters('loadBalancedAppPort1')]",
    /*"idleTimeoutInMinutes": "5",*/
    "probe": {
      "id": "[concat(variables('lbID0'),'/probes/AppPortProbe1')]"
    },
    "protocol": "tcp"
  }
},
{
  "name": "AppPortLBRule2Ipv4",
  "properties": {
    "backendAddressPool": {
      "id": "[variables('lbIPv4PoolID0')]"
    },
    "backendPort": "[parameters('loadBalancedAppPort2')]",
    "enableFloatingIP": "false",
    "frontendIPConfiguration": {
      "id": "[variables('lbIPv4Config0')]"
    },
    "frontendPort": "[parameters('loadBalancedAppPort2')]",
    "idleTimeoutInMinutes": "5",
    "probe": {
      "id": "[concat(variables('lbID0'),'/probes/AppPortProbe2')]"
    },
    "protocol": "tcp"
  }
},
{
  "name": "AppPortLBRule2Ipv6",
  "properties": {
    "backendAddressPool": {
      "id": "[variables('lbIPv6PoolID0')]"
    },
    "backendPort": 8517,
    "enableFloatingIP": "false",
    "frontendIPConfiguration": {
      "id": "[variables('lbIPv6Config0')]"
    },
    "frontendPort": "[parameters('loadBalancedAppPort2')]",
    /*"idleTimeoutInMinutes": "5",*/
    "probe": {
      "id": "[concat(variables('lbID0'),'/probes/AppPortProbe2')]"
    },
    "protocol": "tcp"
  }
}
Also added one probe per load balancing rule, but omitted here for clarity.
The apiVerison for VM Scale set is set to "2017-03-30" per recommendation from aforementioned preview solution.
The network interface configurations are configured according to recommendations as well.
"networkInterfaceConfigurations": [
  {
    "name": "[concat(parameters('nicName'), '-0')]",
    "properties": {
      "ipConfigurations": [
        {
          "name": "[concat(parameters('nicName'),'-IPv4Config-',0)]",
          "properties": {
            "privateIPAddressVersion": "IPv4",
            "loadBalancerBackendAddressPools": [
              {
                "id": "[variables('lbIPv4PoolID0')]"
              }
            ],
            "loadBalancerInboundNatPools": [
              {
                "id": "[variables('lbNatPoolID0')]"
              }
            ],
            "subnet": {
              "id": "[variables('subnet0Ref')]"
            }
          }
        },
        {
          "name": "[concat(parameters('nicName'),'-IPv6Config-',0)]",
          "properties": {
            "privateIPAddressVersion": "IPv6",
            "loadBalancerBackendAddressPools": [
              {
                "id": "[variables('lbIPv6PoolID0')]"
              }
            ]
          }
        }
      ],
      "primary": true
    }
  }
]
With this template I am able to successfully deploy it to Azure. Communication using IPv4 with the
cluster works as expected, however I am unable to get any IPv6 traffic through at all. This is the
same for both ports 80 (HTTP) and 5607 (socket).
When viewing the list of backend pools for the load balancer in the Azure portal, it displays the
following information message, which I have been unable to find any information about. I am unsure
whether this affects anything in any way:
Backend pool 'loadbalanceripv6beaddresspool' was removed from Virtual machine scale set 'Node1'. Upgrade all the instances of 'Node1' for this change to apply Node1
load balancer error message
I am not sure why I cannot get the traffic through on IPv6. It might be that there is something I
have missed in the template, or some other error on my part. If any additional information is required,
don't hesitate to ask.
Here is the entire ARM template. Due to the length and post length limitations I have not embedded it, but here is a Pastebin link to the full ARM Template (Updated).
Update
Some information regarding debugging the IPv6 connectivity. I have tried slightly altering the ARM template to forward the IPv6 traffic on port 80 to backend port 8081 instead. So IPv4 is 80=>80 and IPv6 80=>8081. The ARM template has been updated (see link in previous section).
On port 80 I am running Kestrel as a stateless web server. I have the following entries in the ServiceManifest.xml:
<Endpoint Protocol="http" Name="ServiceEndpoint1" Type="Input" Port="80" />
<Endpoint Protocol="http" Name="ServiceEndpoint3" Type="Input" Port="8081" />
I have been a bit unsure which addresses specifically to listen on in Kestrel. Using FabricRuntime.GetNodeContext().IPAddressOrFQDN always returns the IPv4 address, and this is currently how we start it. For debugging, I currently collect all the IPv6 addresses and, as a hard-coded hack for port 8081, use the first of them. For port 80 I use IPAddress.IPv6Any; however, this always defaults to the IPv4 address returned by FabricRuntime.GetNodeContext().IPAddressOrFQDN.
protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
    var endpoints = Context.CodePackageActivationContext.GetEndpoints()
        .Where(endpoint => endpoint.Protocol == EndpointProtocol.Http ||
                           endpoint.Protocol == EndpointProtocol.Https);
    var strHostName = Dns.GetHostName();
    var ipHostEntry = Dns.GetHostEntry(strHostName);
    var ipv6Addresses = new List<IPAddress>();
    ipv6Addresses.AddRange(ipHostEntry.AddressList.Where(
        ipAddress => ipAddress.AddressFamily == AddressFamily.InterNetworkV6));
    var listeners = new List<ServiceInstanceListener>();
    foreach (var endpoint in endpoints)
    {
        var instanceListener = new ServiceInstanceListener(serviceContext =>
            new KestrelCommunicationListener(
                serviceContext,
                (url, listener) => new WebHostBuilder()
                    .UseKestrel(options =>
                    {
                        if (endpoint.Port == 8081 && ipv6Addresses.Count > 0)
                        {
                            // change idx to test different IPv6 addresses found
                            options.Listen(ipv6Addresses[0], endpoint.Port);
                        }
                        else
                        {
                            // always defaults to ipv4 address
                            options.Listen(IPAddress.IPv6Any, endpoint.Port);
                        }
                    })
                    .ConfigureServices(services => services
                        .AddSingleton<StatelessServiceContext>(serviceContext))
                    .UseContentRoot(Directory.GetCurrentDirectory())
                    .UseServiceFabricIntegration(listener, ServiceFabricIntegrationOptions.None)
                    .UseStartup<Startup>()
                    .UseUrls(url)
                    .Build()), endpoint.Name);
        listeners.Add(instanceListener);
    }
    return listeners;
}
Here are the endpoints shown in the Service Fabric Explorer for one of the nodes: Endpoint addresses
Regarding the socket listener, I have also altered things so that IPv6 is forwarded to backend port 8517 instead of 8507. Similarly to the Kestrel web server, the socket listener opens two listening instances on the respective addresses with the appropriate port.
I hope this information is of some help.
It turns out I made a very stupid mistake that is completely my fault: I forgot to verify that my ISP fully supports IPv6. Turns out they don't!
Testing from a provider with full IPv6 support works as it should and I am able to get full connectivity to the nodes in the Service Fabric cluster.
Here is the working ARM template for anyone that needs a fully working example of Service Fabric cluster with IPv4 and IPv6 support:
Not allowed to post Pastebin links without an accompanying code snippet...
Update:
Due to length constraints the template could not be pasted in this thread in its entirety, however over on the GitHub Issues page for Service Fabric I crossposted this. The ARM template is posted as a comment in that thread, it will hopefully be available longer than the pastebin link. View it here.

ECS and Application Load Balancer

I've been looking for information on CloudFormation with regard to creating a stack with ECS and an ELB (Application Load Balancer), but have been unable to find any.
I have created two Docker images, each containing a Node.js microservice, that listen on ports 3000 and 4000. How do I go about creating my stack with ECS and the ELB as mentioned? I assume the Application Load Balancer can be configured to listen on both these ports?
A sample CloudFormation template would really help.
The Application Load Balancer can be used to spread traffic across the ECS tasks in your service(s). It has two cool features you can leverage: dynamic port mapping (the port on the host is auto-assigned by ECS/Docker), allowing you to run multiple tasks of the same service on a single EC2 instance, and path-based routing, allowing you to route incoming requests to different services depending on patterns in the URL path.
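Path-based routing just means the listener evaluates its rules in priority order and forwards to the first target group whose path pattern matches, falling back to the default action otherwise. A toy Python model of that dispatch (illustrative, not the AWS API):

```python
import fnmatch

def dispatch(rules, default_target, path):
    """Evaluate listener rules in priority order; the first rule whose
    path pattern matches wins, otherwise use the default action."""
    for _priority, pattern, target in sorted(rules):
        if fnmatch.fnmatch(path, pattern):
            return target
    return default_target

# (priority, path pattern, target group) - names are made up for illustration
rules = [
    (1, "/service1*", "TargetGroupService1"),
    (2, "/service2*", "TargetGroupService2"),
]

print(dispatch(rules, "DefaultTargetGroup", "/service1/users"))
print(dispatch(rules, "DefaultTargetGroup", "/healthz"))
```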
To wire it up, you first need to define a TargetGroup like this:
"TargetGroupService1": {
  "Type": "AWS::ElasticLoadBalancingV2::TargetGroup",
  "Properties": {
    "Port": 10,
    "Protocol": "HTTP",
    "HealthCheckPath": "/service1",
    "VpcId": { "Ref": "Vpc" }
  }
}
If you are using dynamic port mapping, the port specified in the target group is irrelevant since it will be overridden by the dynamically allocated port for each target.
Next you define a ListenerRule that defines the path that shall be routed to the TargetGroup:
"ListenerRuleService1": {
  "Type": "AWS::ElasticLoadBalancingV2::ListenerRule",
  "Properties": {
    "Actions": [
      {
        "TargetGroupArn": { "Ref": "TargetGroupService1" },
        "Type": "forward"
      }
    ],
    "Conditions": [
      {
        "Field": "path-pattern",
        "Values": [ "/service1" ]
      }
    ],
    "ListenerArn": { "Ref": "Listener" },
    "Priority": 1
  }
}
Finally you associate your ECS service with the TargetGroup. This enables ECS to automatically register your task containers as targets in the target group (with the host port that you have configured in your TaskDefinition):
"Service1": {
  "Type": "AWS::ECS::Service",
  "DependsOn": [
    "ListenerRuleService1"
  ],
  "Properties": {
    "Cluster": { "Ref": "ClusterName" },
    "DesiredCount": 2,
    "Role": "/ecsServiceRole",
    "TaskDefinition": { "Ref": "Task1" },
    "LoadBalancers": [
      {
        "ContainerName": "Task1",
        "ContainerPort": "8080",
        "TargetGroupArn": { "Ref": "TargetGroupService1" }
      }
    ]
  }
}
You can find more details in a blog post I have written about this, see Amazon ECS and Application Load Balancer
If you're interested in doing this via https://www.terraform.io/, here's an example for two apps that share a domain:
https://ratelim.it => the Rails app running on container port 8100
https://ratelim.it/api => the Java API running on container port 8080
This example supports HTTP & HTTPS, and splits traffic between your apps based on the URL prefix.
my_app_task.json
"portMappings": [
  {
    "hostPort": 0,
    "containerPort": 8100,
    "protocol": "tcp"
  }
],
my_api_task.json
"portMappings": [
  {
    "hostPort": 0,
    "containerPort": 8080,
    "protocol": "tcp"
  }
],
Terraform code:
## ALB for both
resource "aws_alb" "app-alb" {
name = "app-alb"
security_groups = [
"${aws_security_group.albs.id}"]
}
## ALB target for app
resource "aws_alb_target_group" "my_app" {
name = "my_app"
port = 80
protocol = "HTTP"
vpc_id = "${aws_vpc.myvpc.id}"
deregistration_delay = 30
health_check {
protocol = "HTTP"
path = "/healthcheck"
healthy_threshold = 2
unhealthy_threshold = 2
interval = 90
}
}
## ALB Listener for app
resource "aws_alb_listener" "my_app" {
load_balancer_arn = "${aws_alb.app-alb.id}"
port = "80"
protocol = "HTTP"
default_action {
target_group_arn = "${aws_alb_target_group.my_app.id}"
type = "forward"
}
}
## ALB Listener for app https
resource "aws_alb_listener" "my_app_https" {
load_balancer_arn = "${aws_alb.app-alb.id}"
port = "443"
protocol = "HTTPS"
ssl_policy = "ELBSecurityPolicy-2015-05"
certificate_arn = "${data.aws_acm_certificate.my_app.arn}"
default_action {
target_group_arn = "${aws_alb_target_group.my_app.id}"
type = "forward"
}
}
## ALB Target for API
resource "aws_alb_target_group" "my_api" {
name = "myapi"
port = 80
protocol = "HTTP"
vpc_id = "${aws_vpc.myvpc.id}"
deregistration_delay = 30
health_check {
path = "/api/v1/status"
healthy_threshold = 2
unhealthy_threshold = 2
interval = 90
}
}
## ALB Listener Rule for API
resource "aws_alb_listener_rule" "api_rule" {
listener_arn = "${aws_alb_listener.my_app.arn}"
priority = 100
action {
type = "forward"
target_group_arn = "${aws_alb_target_group.my_api.arn}"
}
condition {
field = "path-pattern"
values = [
"/api/*"]
}
}
## ALB Listener Rule for API HTTPS
resource "aws_alb_listener_rule" "myapi_rule_https" {
listener_arn = "${aws_alb_listener.my_app_https.arn}"
priority = 100
action {
type = "forward"
target_group_arn = "${aws_alb_target_group.my_api.arn}"
}
condition {
field = "path-pattern"
values = [
"/api/*"]
}
}
## APP Task
resource "aws_ecs_task_definition" "my_app" {
family = "my_app"
container_definitions = "${data.template_file.my_app_task.rendered}"
}
## App Service
resource "aws_ecs_service" "my_app-service" {
name = "my_app-service"
cluster = "${aws_ecs_cluster.default.id}"
task_definition = "${aws_ecs_task_definition.my_app.arn}"
iam_role = "${aws_iam_role.ecs_role.arn}"
depends_on = [
"aws_iam_role_policy.ecs_service_role_policy"]
load_balancer {
target_group_arn = "${aws_alb_target_group.my_app.id}"
container_name = "my_app"
container_port = 8100
}
}
## API Task
resource "aws_ecs_task_definition" "myapi" {
family = "myapi"
container_definitions = "${data.template_file.myapi_task.rendered}"
}
## API Service
resource "aws_ecs_service" "myapi-service" {
name = "myapi-service"
cluster = "${aws_ecs_cluster.default.id}"
task_definition = "${aws_ecs_task_definition.myapi.arn}"
iam_role = "${aws_iam_role.ecs_role.arn}"
depends_on = [
"aws_iam_role_policy.ecs_service_role_policy"]
load_balancer {
target_group_arn = "${aws_alb_target_group.my_api.id}"
container_name = "myapi"
container_port = 8080
}
}
Are you trying to rebuild the entire ECS stack in CloudFormation? If you can live with pre-defined clusters, you can just register the instances with user data when they spin up (I use Spot Fleet, but this should work anywhere you're starting an instance). Something like this in your LaunchSpecifications:
"UserData":
{ "Fn::Base64": { "Fn::Join": [ "", [
  "#!/bin/bash\n",
  "yum update -y\n",
  "echo ECS_CLUSTER=YOUR_CLUSTER_NAME >> /etc/ecs/ecs.config\n",
  "yum install -y aws-cli\n",
  "aws ec2 create-tags --region YOUR_REGION --resources $(curl http://169.254.169.254/latest/meta-data/instance-id) --tags Key=Name,Value=YOUR_INSTANCE_NAME\n"
]]}}
I know it's not pure Infrastructure as Code, but it gets the job done with minimal effort, and my cluster configs don't really change a lot.