AWS ELB redirect HTTP to HTTPS - aws-cloudformation

I am using this CloudFormation template https://github.com/widdix/aws-cf-templates/blob/master/jenkins/jenkins2-ha-agents.yaml to set up a Jenkins server.
I now want to add an SSL certificate to the ELB, and have modified https://github.com/widdix/aws-cf-templates/blob/master/jenkins/jenkins2-ha-agents.yaml#L511-L519 to the following:
MasterELBListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    DefaultActions:
      - Type: "redirect"
        RedirectConfig:
          Protocol: "HTTPS"
          Port: "443"
          Host: "#{host}"
          Path: "/#{path}"
          Query: "#{query}"
          StatusCode: "HTTP_301"
    LoadBalancerArn: !Ref MasterELB
    Port: 80
    Protocol: HTTP
MasterHTTPSListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    Certificates:
      # - CertificateArn: !Ref CertificateARN
      - CertificateArn: !FindInMap
          - SSLmapping
          - ssl1
          - !FindInMap
            - AWSRegionsNameMapping
            - !Ref 'AWS::Region'
            - RegionName
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref MasterELBTargetGroup
    LoadBalancerArn: !Ref MasterELB
    Port: 443
    Protocol: HTTPS
But when I try to access the site, it just times out.
Any advice is much appreciated.

OK, I needed to open up access to port 443 on the ELB, with:
MasterELBHTTPSSGInWorld:
  Type: 'AWS::EC2::SecurityGroupIngress'
  Condition: HasNotAuthProxySecurityGroup
  Properties:
    GroupId: !Ref MasterELBSG
    IpProtocol: tcp
    FromPort: 443
    ToPort: 443
    CidrIp: '0.0.0.0/0'
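To double-check that the rule actually landed on the ELB's security group, a quick boto3 check like the one below can help. This is only a sketch; the group ID is a placeholder, not a value from the template.

import boto3

# Placeholder ID: substitute the resolved MasterELBSG security group ID.
ec2 = boto3.client('ec2')
resp = ec2.describe_security_groups(GroupIds=['sg-0123456789abcdef0'])

for perm in resp['SecurityGroups'][0]['IpPermissions']:
    if perm.get('FromPort') == 443 and perm.get('ToPort') == 443:
        print('443 ingress present:', perm['IpRanges'])
        break
else:
    print('no 443 ingress rule on this security group')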

Related

AWS ECS Fargate service deployment is failing "Invalid request provided: CreateService error: Container Port is missing "

I am deploying my containerized Spring Boot app on ECS Fargate using CloudFormation templates.
Note that I am using an internal ALB with a target group of type IP.
My TaskDefinition is fine, but the service stack gives the below error while creating the stack.
Resource handler returned message: "Invalid request provided: CreateService error: Container Port is missing (Service: AmazonECS; Status Code: 400; Error Code: InvalidParameterException; Request ID: XXX-XXX-XXX; Proxy: null)" (RequestToken: xxx-xxx-xxx, HandlerErrorCode: InvalidRequest)
Does anyone know what this error means?
I have specified a container with a port in the task definition.
My template:
AWSTemplateFormatVersion: "2010-09-09"
Description: "CloudFormation template for creating a task definition"
Parameters:
  taskDefName:
    Type: String
    Default: 'task-def'
  springActiveProfile:
    Type: String
    Default: 'Dev'
  appDefaultPort:
    Type: Number
    Default: 3070
  heapMemLimit:
    Type: String
    Default: "-Xms512M -Xmx512M"
Resources:
  MyTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      RequiresCompatibilities:
        - "FARGATE"
      Family: !Ref taskDefName
      NetworkMode: "awsvpc"
      RuntimePlatform:
        CpuArchitecture: X86_64
        OperatingSystemFamily: LINUX
      ExecutionRoleArn: "xxxxx"
      Cpu: 0.25vCPU
      Memory: 0.5GB
      ContainerDefinitions:
        - Name: "container1"
          Image: xxx
          MemoryReservation: 128
          Memory: 512
          PortMappings:
            - ContainerPort: 3070
              Protocol: tcp
          LogConfiguration:
            LogDriver: awslogs
            Options:
              awslogs-group: 'ecs'
              awslogs-region: us-east-1
              awslogs-stream-prefix: 'spec'
  OneService:
    Type: AWS::ECS::Service
    Properties:
      LaunchType: FARGATE
      TaskDefinition: !Ref MyTaskDefinition
      Cluster: "clusterName"
      DesiredCount: 2
      DeploymentConfiguration:
        MaximumPercent: 100
        MinimumHealthyPercent: 70
      NetworkConfiguration:
        AwsvpcConfiguration:
          Subnets:
            - subnet-xxx
            - subnet-xxx
          SecurityGroups:
            - sg-xxx
      LoadBalancers:
        - ContainerName: container1
        - ContainerPort: 3070
        - TargetGroupArn: arn:xxx
This was due to the YAML format.
Incorrect
LoadBalancers:
  - ContainerName: container1
  - ContainerPort: 3070
  - TargetGroupArn: arn:xxx
Correct
LoadBalancers:
  - ContainerName: container1
    ContainerPort: 3070
    TargetGroupArn: arn:xxx
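The difference is easier to see when both snippets are parsed: the incorrect version produces three separate single-key list items, so the one load balancer entry never gets a ContainerPort, while the correct version produces a single mapping with all three keys. A small illustration (using PyYAML, purely to show the parse):

import yaml

incorrect = """
LoadBalancers:
  - ContainerName: container1
  - ContainerPort: 3070
  - TargetGroupArn: arn:xxx
"""

correct = """
LoadBalancers:
  - ContainerName: container1
    ContainerPort: 3070
    TargetGroupArn: arn:xxx
"""

# Three separate entries; the first has no ContainerPort, hence the error:
print(yaml.safe_load(incorrect)['LoadBalancers'])
# A single load balancer entry with all required keys:
print(yaml.safe_load(correct)['LoadBalancers'])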

server closed the stream without sending trailers

I'm trying to communicate from Envoy to Envoy using gRPC on Kubernetes (Amazon EKS).
I have Envoy as a sidecar and I am using grpcurl to validate the request.
The request is delivered to the application container and there are no errors, but the console returns the following result:
server closed the stream without sending trailers
I don't know what is causing the above problem. What could be the reason for this result?
I was able to confirm that the response came back fine when I hit the single service directly, before connecting through Envoy.
This is my Envoy config:
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
      protocol: TCP
      address: 127.0.0.1
      port_value: 10000
static_resources:
  listeners:
    - name: listener_secure_grpc
      address:
        socket_address:
          protocol: TCP
          address: 0.0.0.0
          port_value: 8443
      traffic_direction: INBOUND
      filter_chains:
        - filters:
            - name: envoy.http_connection_manager
              typed_config:
                "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager"
                codec_type: auto
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: service_grpc
                      domains:
                        - "*"
                      routes:
                        - match:
                            prefix: "/"
                          route:
                            cluster: cluster_grpc
                            max_stream_duration:
                              grpc_timeout_header_max: 30s
                tracing: {}
                http_filters:
                  - name: envoy.filters.http.health_check
                    typed_config:
                      "@type": "type.googleapis.com/envoy.extensions.filters.http.health_check.v3.HealthCheck"
                      pass_through_mode: false
                      headers:
                        - name: ":path"
                          exact_match: "/healthz"
                  - name: envoy.filters.http.router
          transport_socket:
            name: envoy.transport_sockets.tls
            typed_config:
              "@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext"
              common_tls_context:
                tls_certificates:
                  - certificate_chain:
                      filename: /etc/ssl/grpc/tls.crt
                    private_key:
                      filename: /etc/ssl/grpc/tls.key
    - name: listener_stats
      address:
        socket_address:
          protocol: TCP
          address: 0.0.0.0
          port_value: 10001
      traffic_direction: INBOUND
      filter_chains:
        - filters:
            - name: envoy.http_connection_manager
              typed_config:
                "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager"
                codec_type: AUTO
                stat_prefix: ingress_http
                route_config:
                  virtual_hosts:
                    - name: backend
                      domains:
                        - "*"
                      routes:
                        - match:
                            prefix: /stats
                          route:
                            cluster: cluster_admin
                http_filters:
                  - name: envoy.filters.http.router
    - name: listener_healthcheck
      address:
        socket_address:
          protocol: TCP
          address: 0.0.0.0
          port_value: 10010
      traffic_direction: INBOUND
      filter_chains:
        - filters:
            - name: envoy.http_connection_manager
              typed_config:
                "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager"
                codec_type: AUTO
                stat_prefix: ingress_http
                route_config: {}
                http_filters:
                  - name: envoy.filters.http.health_check
                    typed_config:
                      "@type": "type.googleapis.com/envoy.extensions.filters.http.health_check.v3.HealthCheck"
                      pass_through_mode: false
                      headers:
                        - name: ":path"
                          exact_match: "/healthz"
                  - name: envoy.filters.http.router
  clusters:
    - name: cluster_grpc
      connect_timeout: 1s
      type: STATIC
      http2_protocol_options: {}
      upstream_connection_options:
        tcp_keepalive: {}
      load_assignment:
        cluster_name: cluster_grpc
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: 127.0.0.1
                      port_value: 1443
    - name: cluster_admin
      connect_timeout: 1s
      type: STATIC
      load_assignment:
        cluster_name: cluster_grpc
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: 127.0.0.1
                      port_value: 10000
P.S 2021.03.19
Here's what else I found out.
When I request from the ingress host, I get the above failure, but when I request from the service, I get a normal response!
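In case it helps anyone debugging the same symptom: the trailers (or their absence) can also be inspected from a plain Python client instead of grpcurl. This is only a rough probe under assumptions; the host, port, and method path below are placeholders, not values from the config above.

import grpc

# Placeholders: point these at the ingress host and a real method of your service.
target = "my-grpc-ingress.example.com:443"
method = "/helloworld.Greeter/SayHello"

# Pass ssl_channel_credentials(root_certificates=...) if the sidecar serves a private CA.
channel = grpc.secure_channel(target, grpc.ssl_channel_credentials())
probe = channel.unary_unary(
    method,
    request_serializer=lambda m: m,      # send raw bytes, purely for probing
    response_deserializer=lambda b: b,
)

try:
    response, call = probe.with_call(b"", timeout=5)
    print("status:", call.code(), "trailers:", call.trailing_metadata())
except grpc.RpcError as err:
    # "server closed the stream without sending trailers" typically shows up here
    # with an INTERNAL/UNAVAILABLE code and empty trailing metadata, which points
    # at a hop that is not forwarding the gRPC response end-to-end over HTTP/2.
    print("status:", err.code(), "trailers:", err.trailing_metadata())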

Creating blue green deployment in CloudFormation (One Load Balancer, 2 target groups)

I am trying to create CloudFormation IaC for an app to do blue/green deployment. It keeps giving me: The target group with targetGroupArn arn:aws:elasticloadbalancing:ap-xxx-9:000:targetgroup/master-tg-2 does not have an associated load balancer.
I wonder where I went wrong. I added a DependsOn for the MasterLB listener just as stated in this question. I also link up both target groups in MasterECSServices.
The following is the CloudFormation template:
MasterLBSG:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Access to the public facing load balancer
    VpcId:
      Fn::ImportValue: # TAS-dev:VPCId
        !Sub "${TasStackName}:VPCId"
    SecurityGroupIngress:
      - CidrIp: 0.0.0.0/0
        IpProtocol: tcp
        FromPort: 8000
        ToPort: 8000
MasterLB:
  Type: AWS::ElasticLoadBalancingV2::LoadBalancer
  Properties:
    Name: Master-Dev-LB
    Scheme: internet-facing
    LoadBalancerAttributes:
      - Key: idle_timeout.timeout_seconds
        Value: '30'
    Subnets:
      - !Sub "${StackName}:PublicSubnetOne"
      - !Sub "${StackName}:PublicSubnetTwo"
    SecurityGroups: [!Ref 'MasterLBSG']
MasterLBListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  DependsOn:
    - MasterLB
  Properties:
    DefaultActions:
      - TargetGroupArn: !Ref 'MasterTGOne'
        Type: 'forward'
    LoadBalancerArn: !Ref 'MasterLB'
    Port: 8000
    Protocol: HTTP
MasterTGOne: # Means MasterTargetGroupOne
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    Name: master-tg-1
    Port: 8000
    Protocol: HTTP
    VpcId: "${TasStackName}:VPCId"
    TargetType: ip
## to be used as a spare TargetGroup for blue green deployment
MasterTGTwo: # Means MasterTargetGroupTwo
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    Name: master-tg-2
    Port: 8000
    Protocol: HTTP
    VpcId: "${TasStackName}:VPCId"
    TargetType: ip
MasterECSServices:
  Type: AWS::ECS::Service
  DependsOn:
    - MasterLBListener
  Properties:
    Cluster: "${TasStackName}:ClusterName"
    DeploymentController:
      Type: CODE_DEPLOY
    DesiredCount: 1
    LaunchType: FARGATE
    LoadBalancers:
      - ContainerName: master-app
        ContainerPort: '8000'
        TargetGroupArn: !Ref 'MasterTGOne'
      - ContainerName: master-app
        ContainerPort: '8000'
        TargetGroupArn: !Ref 'MasterTGTwo'
    NetworkConfiguration:
      AwsvpcConfiguration:
        SecurityGroups:
          - !Ref MasterAppSG
        Subnets:
          - "${TasStackName}:PrivateSubnetOne"
          - "${TasStackName}:PrivateSubnetTwo"
    Role: "${TasStackName}:ECSRole"
    TaskDefinition: !Ref 'MasterTaskDef'
Update:
Since May 19, 2020, AWS CloudFormation supports blue/green deployments for Amazon ECS.
Before
An example of a custom resource in CloudFormation which performs blue/green deployment for ECS. It uses crhelper:
Lambda which creates the blue/green deployment group for ECS (i.e. the logic of your custom resource):
import logging
import json
import boto3
from time import sleep
from crhelper import CfnResource

logger = logging.getLogger(__name__)

# Initialise the helper, all inputs are optional,
# this example shows the defaults
helper = CfnResource(json_logging=False,
                     log_level='DEBUG',
                     boto_level='CRITICAL',
                     sleep_on_delete=120)

try:
    ## Init code goes here
    cd = boto3.client('codedeploy')
    pass
except Exception as e:
    helper.init_failure(e)


@helper.create
def create(event, context):
    logger.info("Got Create")
    print(json.dumps(event))

    application_name = event['ResourceProperties']['ApplicationName']
    service_role_arn = event['ResourceProperties']['ServiceRoleArn']
    cluster_name = event['ResourceProperties']['ClusterName']
    service_name = event['ResourceProperties']['ServiceName']
    elb_name = event['ResourceProperties']['ELBName']
    tg1_name = event['ResourceProperties']['TG1Name']
    tg2_name = event['ResourceProperties']['TG2Name']
    listener_arn = event['ResourceProperties']['ListenerArn']
    deployment_group_name = event['ResourceProperties']['GroupName']
    deployment_style = event['ResourceProperties'].get(
        'DeploymentStyle', 'BLUE_GREEN')

    response = cd.create_deployment_group(
        applicationName=application_name,
        deploymentGroupName=deployment_group_name,
        serviceRoleArn=service_role_arn,
        autoRollbackConfiguration={
            'enabled': True,
            'events': ['DEPLOYMENT_FAILURE']
        },
        deploymentStyle={
            'deploymentType': deployment_style,
            'deploymentOption': 'WITH_TRAFFIC_CONTROL'
        },
        blueGreenDeploymentConfiguration={
            "terminateBlueInstancesOnDeploymentSuccess": {
                "action": "TERMINATE",
                "terminationWaitTimeInMinutes": 0
            },
            "deploymentReadyOption": {
                "actionOnTimeout": "CONTINUE_DEPLOYMENT",
                "waitTimeInMinutes": 0
            }
        },
        loadBalancerInfo={
            "targetGroupPairInfoList": [
                {
                    "targetGroups": [
                        {"name": tg1_name},
                        {"name": tg2_name}
                    ],
                    "prodTrafficRoute": {
                        "listenerArns": [listener_arn]
                    }
                }
            ]
        },
        ecsServices=[
            {
                "serviceName": service_name,
                "clusterName": cluster_name
            }
        ]
    )
    print(response)

    helper.Data.update({"Name": deployment_group_name})

    cd_group_id = response['deploymentGroupId']
    return cd_group_id


@helper.delete
def delete(event, context):
    # Delete never returns anything. Should not fail if the
    # underlying resources are already deleted.
    # Desired state.
    logger.info("Got Delete")
    print(json.dumps(event))
    try:
        application_name = event['ResourceProperties']['ApplicationName']
        deployment_group_name = event['ResourceProperties']['GroupName']
        response = cd.delete_deployment_group(
            applicationName=application_name,
            deploymentGroupName=deployment_group_name
        )
        print(response)
    except Exception as e:
        print(str(e))


def handler(event, context):
    helper(event, context)
Execute the Lambda from CloudFormation
Once you set up your Lambda, you can use it in CloudFormation like any other "normal" resource:
MyUseCustomLambda:
  Type: Custom::CodeDeployCustomGroup
  Version: "1.0"
  Properties:
    Name: UseCustomLambda
    ServiceToken: !Ref CustomLambdaArn
    ApplicationName: !Ref ApplicationName
    ServiceRoleArn: !Ref ServiceRoleArn
    ELBName: !Ref ELBName
    TG1Name: !Ref TG1Name
    TG2Name: !Ref TG2Name
    GroupName: !Ref GroupName
    ClusterName: !Ref ClusterName
    ServiceName: !Ref ServiceName
    ListenerArn: !Ref ListenerArn
    DeploymentStyle: !Ref DeploymentStyle
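As a follow-up note on how this gets used: with a CODE_DEPLOY deployment controller, new task definition revisions are rolled out through CodeDeploy rather than by updating the ECS service directly. Below is a hedged boto3 sketch of triggering such a blue/green deployment; the application name, deployment group name, task definition ARN, and container details are all placeholders, not values from the template above.

import json
import boto3

cd = boto3.client('codedeploy')

# Placeholder AppSpec: point TaskDefinition at the new revision to roll out.
app_spec = {
    "version": 0.0,
    "Resources": [{
        "TargetService": {
            "Type": "AWS::ECS::Service",
            "Properties": {
                "TaskDefinition": "arn:aws:ecs:ap-xxx-9:000:task-definition/master-app:2",
                "LoadBalancerInfo": {
                    "ContainerName": "master-app",
                    "ContainerPort": 8000
                }
            }
        }
    }]
}

response = cd.create_deployment(
    applicationName="my-codedeploy-app",          # placeholder
    deploymentGroupName="my-deployment-group",    # placeholder
    revision={
        "revisionType": "AppSpecContent",
        "appSpecContent": {"content": json.dumps(app_spec)},
    },
)
print(response["deploymentId"])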

SERVICE UNAVAILABLE - No raft leader when trying to create channel in Hyperledger fabric setup in Kubernetes

Start_orderer.sh file:
#edit *values.yaml file to be used with helm chart and deploy orderer through it
consensus_type=etcdraft

#change below instantiated variables for changing configuration of persistent volume sizes
persistence_status=true
persistent_volume_size=2Gi

while getopts "i:o:O:d:" c
do
  case $c in
    i) network_id=$OPTARG ;;
    o) number=$OPTARG ;;
    O) org_name=$OPTARG ;;
    d) domain=$OPTARG ;;
  esac
done

network_path=/etc/zeeve/fabric/${network_id}

source status.sh

cp ../yaml-files/orderer.yaml $network_path/yaml-files/orderer-${number}${org_name}_values.yaml
sed -i "s/persistence_status/$persistence_status/; s/persistent_volume_size/$persistent_volume_size/; s/consensus_type/$consensus_type/; s/number/$number/g; s/org_name/${org_name}/; s/domain/$domain/; " $network_path/yaml-files/orderer-${number}${org_name}_values.yaml

helm install orderer-${number}${org_name} --namespace blockchain-${org_name} -f $network_path/yaml-files/orderer-${number}${org_name}_values.yaml `pwd`/../helm-charts/hlf-ord
cmd_success $? orderer-${number}${org_name}

#update state of deployed component, used for pod-level operations like start, stop, restart etc.
update_statusfile helm orderer_${number}${org_name} orderer-${number}${org_name}
update_statusfile persistence orderer_${number}${org_name} $persistence_status
Configtx.yaml:
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
Organizations:
  - &Orginvestor
    Name: investor
    ID: investorMSP
    MSPDir: ./crypto-config/investor/msp
    AnchorPeers:
      - Host: peer1.investor.demointainabs.emulya.com
        Port: 443
  - &Orgtrustee
    Name: trustee
    ID: trusteeMSP
    MSPDir: ./crypto-config/trustee/msp
    AnchorPeers:
      - Host: peer1.trustee.demointainabs.emulya.com
        Port: 443
  - &Orgwhlender
    Name: whlender
    ID: whlenderMSP
    MSPDir: ./crypto-config/whlender/msp
    AnchorPeers:
      - Host: peer1.whlender.demointainabs.emulya.com
        Port: 443
  - &Orgservicer
    Name: servicer
    ID: servicerMSP
    MSPDir: ./crypto-config/servicer/msp
    AnchorPeers:
      - Host: peer1.servicer.demointainabs.emulya.com
        Port: 443
  - &Orgissuer
    Name: issuer
    ID: issuerMSP
    MSPDir: ./crypto-config/issuer/msp
    AnchorPeers:
      - Host: peer1.issuer.demointainabs.emulya.com
        Port: 443
  - &Orgoriginator
    Name: originator
    ID: originatorMSP
    MSPDir: ./crypto-config/originator/msp
    AnchorPeers:
      - Host: peer1.originator.demointainabs.emulya.com
        Port: 443
  - &Orginvestor
    Name: investor
    ID: investorMSP
    MSPDir: ./crypto-config/investor/msp
    AnchorPeers:
      - Host: peer1.investor.intainabs.emulya.com
        Port: 443
  - &Orgtrustee
    Name: trustee
    ID: trusteeMSP
    MSPDir: ./crypto-config/trustee/msp
    AnchorPeers:
      - Host: peer1.trustee.intainabs.emulya.com
        Port: 443
  - &Orgwhlender
    Name: whlender
    ID: whlenderMSP
    MSPDir: ./crypto-config/whlender/msp
    AnchorPeers:
      - Host: peer1.whlender.intainabs.emulya.com
        Port: 443
  - &Orgservicer
    Name: servicer
    ID: servicerMSP
    MSPDir: ./crypto-config/servicer/msp
    AnchorPeers:
      - Host: peer1.servicer.intainabs.emulya.com
        Port: 443
  - &Orgissuer
    Name: issuer
    ID: issuerMSP
    MSPDir: ./crypto-config/issuer/msp
    AnchorPeers:
      - Host: peer1.issuer.intainabs.emulya.com
        Port: 443
  - &Orgoriginator
    Name: originator
    ID: originatorMSP
    MSPDir: ./crypto-config/originator/msp
    AnchorPeers:
      - Host: peer1.originator.intainabs.emulya.com
        Port: 443
Orderer: &OrdererDefaults
  OrdererType: etcdraft
  Addresses:
    - orderer1.originator.demointainabs.emulya.com:443
    - orderer2.trustee.demointainabs.emulya.com:443
    - orderer2.issuer.demointainabs.emulya.com:443
    - orderer1.trustee.demointainabs.emulya.com:443
    - orderer1.issuer.demointainabs.emulya.com:443
    - orderer1.originator.intainabs.emulya.com:443
    - orderer2.trustee.intainabs.emulya.com:443
    - orderer2.issuer.intainabs.emulya.com:443
    - orderer1.trustee.intainabs.emulya.com:443
    - orderer1.issuer.intainabs.emulya.com:443
  BatchTimeout: 2s
  BatchSize:
    MaxMessageCount: 10
    AbsoluteMaxBytes: 99 MB
    PreferredMaxBytes: 512 KB
  Kafka:
    Brokers:
      - kafka-hlf.blockchain-kz.svc.cluster.local:9092
  EtcdRaft:
    Consenters:
      - Host: orderer1.originator.demointainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/originator/orderer-1originator/tls/server.crt
        ServerTLSCert: crypto-config/originator/orderer-1originator/tls/server.crt
      - Host: orderer2.trustee.demointainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/trustee/orderer-2trustee/tls/server.crt
        ServerTLSCert: crypto-config/trustee/orderer-2trustee/tls/server.crt
      - Host: orderer2.issuer.demointainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/issuer/orderer-2issuer/tls/server.crt
        ServerTLSCert: crypto-config/issuer/orderer-2issuer/tls/server.crt
      - Host: orderer1.trustee.demointainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/trustee/orderer-1trustee/tls/server.crt
        ServerTLSCert: crypto-config/trustee/orderer-1trustee/tls/server.crt
      - Host: orderer1.issuer.demointainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/issuer/orderer-1issuer/tls/server.crt
        ServerTLSCert: crypto-config/issuer/orderer-1issuer/tls/server.crt
      - Host: orderer1.originator.intainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/originator/orderer-1originator/tls/server.crt
        ServerTLSCert: crypto-config/originator/orderer-1originator/tls/server.crt
      - Host: orderer2.trustee.intainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/trustee/orderer-2trustee/tls/server.crt
        ServerTLSCert: crypto-config/trustee/orderer-2trustee/tls/server.crt
      - Host: orderer2.issuer.intainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/issuer/orderer-2issuer/tls/server.crt
        ServerTLSCert: crypto-config/issuer/orderer-2issuer/tls/server.crt
      - Host: orderer1.trustee.intainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/trustee/orderer-1trustee/tls/server.crt
        ServerTLSCert: crypto-config/trustee/orderer-1trustee/tls/server.crt
      - Host: orderer1.issuer.intainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/issuer/orderer-1issuer/tls/server.crt
        ServerTLSCert: crypto-config/issuer/orderer-1issuer/tls/server.crt
  Organizations:
Application: &ApplicationDefaults
  Organizations:
Profiles:
  BaseGenesis:
    Orderer:
      <<: *OrdererDefaults
      Organizations:
        - *Orgoriginator
        - *Orgtrustee
        - *Orgissuer
        - *Orgoriginator
        - *Orgtrustee
        - *Orgissuer
    Consortiums:
      MyConsortium:
        Organizations:
          - *Orginvestor
          - *Orgtrustee
          - *Orgwhlender
          - *Orgservicer
          - *Orgissuer
          - *Orgoriginator
          - *Orginvestor
          - *Orgtrustee
          - *Orgwhlender
          - *Orgservicer
          - *Orgissuer
          - *Orgoriginator
  BaseChannel:
    Consortium: MyConsortium
    Application:
      <<: *ApplicationDefaults
      Organizations:
        - *Orgoriginator
        - *Orgissuer
        - *Orgservicer
        - *Orgwhlender
        - *Orgtrustee
        - *Orginvestor
        - *Orgoriginator
        - *Orgissuer
        - *Orgservicer
        - *Orgwhlender
        - *Orgtrustee
        - *Orginvestor
I am currently setting up a Hyperledger Fabric network in Kubernetes. My network includes 6 organizations and 5 orderer nodes. Our orderers use Raft consensus. I have done the following:
- Set up CA and TLS CA servers
- Set up the ingress controller
- Generated crypto materials for peers and orderers
- Generated channel artifacts
- Started peers and orderers
The next step is to create the channel on the orderer for each org and join the peers in each org to the channel. I am unable to create the channel. When requesting channel creation, I get the following error:
SERVICE UNAVAILABLE - No raft leader.
How can I fix this issue?
Can anyone please guide me on this. Thanks in advance.
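Not an answer, but one cheap sanity check for a "No raft leader" symptom: make sure every orderer address in configtx.yaml has a matching Raft consenter entry (and vice versa), since leader election only happens among consenters whose hosts and TLS certificates line up. A rough sketch against the file above, assuming it is in the current directory:

import yaml

with open('configtx.yaml') as f:
    cfg = yaml.safe_load(f)

orderer = cfg['Orderer']
# Strip the :443 suffix from Addresses so they compare against consenter hosts.
addresses = {a.rsplit(':', 1)[0] for a in orderer['Addresses']}
consenters = {c['Host'] for c in orderer['EtcdRaft']['Consenters']}

print('addresses without a consenter entry:', addresses - consenters)
print('consenters missing from Addresses:', consenters - addresses)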

AWS Fargate - fails health check while instance is up

I have a similar question to some other posts, but none of the specific issues apply as far as I can tell. I will post my stack later in this post.
I have:
ALB----->Listener->target group->Fargate service->task definition
80/http ->8080/http -> 8080/http
The problem is my health checks fail. When the Fargate task spins up an instance, I can go to that instance using the health check URL and I get a 200 response. However, any attempt to go through the load balancer results in a gateway timeout.
$ curl -fv http://172.31.47.18:8080/healthz
* Trying 172.31.47.18...
* TCP_NODELAY set
* Connected to 172.31.47.18 (172.31.47.18) port 8080 (#0)
> GET /healthz HTTP/1.1
> Host: 172.31.47.18:8080
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Transfer-Encoding: chunked
< Date: Sun, 24 Nov 2019 15:33:39 GMT
< Server: Warp/3.2.27
<
* Connection #0 to host 172.31.47.18 left intact
OK
However, the health check never passes on the LB.
The security group used for everything right now is wide open; I wanted to eliminate that as an issue.
The Fargate tasks are set up with public IPs.
This has been driving me crazy for the last couple of days. I stood up an EC2-backed ECS cluster, and everything works on EC2. I should point out that the entire stack builds just fine on Fargate, except that no traffic seems to reach it from the load balancer.
The error in the service events says:
service test-graph (port 8080) is unhealthy in target-group tg--test-graph due to (reason Request timed out).
Hopefully someone has an idea.
TaskDef0:
  Type: AWS::ECS::TaskDefinition
  DependsOn: Cluster0
  Properties:
    ExecutionRoleArn: arn:aws:iam::xxxxx:role/ECS_Hasura_Execution_Role
    TaskRoleArn: arn:aws:iam::xxxxx:role/ecsTaskExecutionRole
    Family: !Ref 'ServiceName'
    Cpu: !FindInMap
      - ContainerSizeMap
      - !Ref ContainerSize
      - Cpu
    Memory: !FindInMap
      - ContainerSizeMap
      - !Ref ContainerSize
      - Memory
    NetworkMode: awsvpc
    RequiresCompatibilities:
      - FARGATE
    ContainerDefinitions:
      - Name: !Ref 'ServiceName'
        Cpu: !FindInMap
          - ContainerSizeMap
          - !Ref ContainerSize
          - Cpu
        Memory: !FindInMap
          - ContainerSizeMap
          - !Ref ContainerSize
          - Memory
        Image: !FindInMap
          - ServiceMap
          - !Ref ServiceProvider
          - ImageUrl
        PortMappings:
          - ContainerPort: !Ref 'ContainerPort'
            HostPort: !Ref ContainerPort
            Protocol: tcp
ALB0:
  Type: AWS::ElasticLoadBalancingV2::LoadBalancer
  DependsOn: TaskDef0
  Properties:
    Name: !Join
      - '-'
      - - lb-
        - !Ref ServiceName
    Scheme: internet-facing
    IpAddressType: ipv4
    LoadBalancerAttributes:
      - Key: deletion_protection.enabled
        Value: false
      - Key: idle_timeout.timeout_seconds
        Value: 60
      - Key: routing.http.drop_invalid_header_fields.enabled
        Value: false
      - Key: routing.http2.enabled
        Value: true
    SecurityGroups:
      - sg-xxxxxx # allow HTTP/HTTPS to the load balancer
    Subnets:
      - subnet-111111
      - subnet-222222
      - subnet-333333
    Type: application
targetGroup0:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  DependsOn: ALB0
  Properties:
    Name: !Join
      - '-'
      - - tg-
        - !Ref ServiceName
    Port: !Ref TargetGroupPort
    Protocol: !Ref TargetGroupProtocol
    TargetType: ip
    VpcId: !FindInMap
      - ServiceMap
      - !Ref ServiceProvider
      - VpcId
    # all other parameters can be changed without interruption
    HealthCheckPort: traffic-port
    HealthCheckEnabled: !FindInMap
      - LBTGMap
      - Parameters
      - HealthCheckEnabled
    HealthCheckIntervalSeconds: !FindInMap
      - LBTGMap
      - Parameters
      - HealthCheckIntervalSeconds
    HealthCheckPath: !FindInMap
      - ServiceMap
      - !Ref ServiceProvider
      - HealthCheckPath
    HealthCheckProtocol: !FindInMap
      - ServiceMap
      - !Ref ServiceProvider
      - HealthCheckProtocol
    HealthCheckTimeoutSeconds: !FindInMap
      - LBTGMap
      - Parameters
      - HealthCheckTimeoutSeconds
    HealthyThresholdCount: !FindInMap
      - LBTGMap
      - Parameters
      - HealthyThresholdCount
    UnhealthyThresholdCount: !FindInMap
      - LBTGMap
      - Parameters
      - UnhealthyThresholdCount
    Matcher:
      HttpCode: !FindInMap
        - ServiceMap
        - !Ref ServiceProvider
        - HealthCheckSuccessCode
    TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: !FindInMap
          - LBTGMap
          - Parameters
          - DeregistrationDelay
      - Key: slow_start.duration_seconds
        Value: !FindInMap
          - LBTGMap
          - Parameters
          - SlowStart
      - Key: stickiness.enabled
        Value: !FindInMap
          - LBTGMap
          - Parameters
          - Stickiness
Listener0:
  # This is the fixed response test listener
  Type: AWS::ElasticLoadBalancingV2::Listener
  DependsOn: ALB0
  Properties:
    DefaultActions:
      - Type: fixed-response
        FixedResponseConfig:
          ContentType: text/html
          MessageBody: <h1>Working</h1><p>The load balancer test listener is operational</p>
          StatusCode: 200
    LoadBalancerArn: !Ref ALB0
    Port: 9000
    Protocol: HTTP
Listener1:
  # This is the port 80 listener
  Type: AWS::ElasticLoadBalancingV2::Listener
  DependsOn: ALB0
  Properties:
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref targetGroup0
    LoadBalancerArn: !Ref ALB0
    Port: 80
    Protocol: HTTP
Listener2:
  # This is the port 8080 listener
  Type: AWS::ElasticLoadBalancingV2::Listener
  DependsOn: ALB0
  Properties:
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref targetGroup0
    LoadBalancerArn: !Ref ALB0
    Port: 8080
    Protocol: HTTP
Listener3:
  # This is the port 443 listener
  Type: AWS::ElasticLoadBalancingV2::Listener
  DependsOn: ALB0
  Properties:
    Certificates:
      - CertificateArn: !FindInMap
          - CertificateMap
          - !Ref AWS::Region
          - CertifcateArn
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref targetGroup0
    LoadBalancerArn: !Ref ALB0
    Port: 443
    Protocol: HTTPS
Service0:
  Type: AWS::ECS::Service
  DependsOn: Listener2
  Properties:
    ServiceName: !Ref 'ServiceName'
    Cluster: !Ref Cluster0
    LaunchType: FARGATE
    DeploymentConfiguration:
      MaximumPercent: !FindInMap
        - ECSServiceMap
        - Parameters
        - MaximumPercent
      MinimumHealthyPercent: !FindInMap
        - ECSServiceMap
        - Parameters
        - MinimumHealthyPercent
    DesiredCount: !Ref 'DesiredTaskCount'
    NetworkConfiguration:
      AwsvpcConfiguration:
        AssignPublicIp: ENABLED
        SecurityGroups: # this is allow all ports and IPs
          - !FindInMap
            - SecurityGroupMap
            - !Ref AWS::Region
            - sg0
        Subnets:
          - !FindInMap
            - SubnetMap
            - !Ref AWS::Region
            - subnet0
          - !FindInMap
            - SubnetMap
            - !Ref AWS::Region
            - subnet1
          - !FindInMap
            - SubnetMap
            - !Ref AWS::Region
            - subnet2
    TaskDefinition: !Ref 'TaskDef0'
    LoadBalancers:
      - ContainerName: !Ref 'ServiceName'
        ContainerPort: !Ref 'ContainerPort'
        TargetGroupArn: !Ref 'targetGroup0'
    Tags:
      - Key: Application
        Value: !Ref "Application"
      - Key: Customer
        Value: !Ref "Customer"
      - Key: Role
        Value: !Ref "Role"
      - Key: InternetAccessible
        Value: !Ref "InternetAccessible"
      - Key: CreationDate
        Value: !Ref "CreationDate"
      - Key: CreatedBy
        Value: !Ref "CreatedBy"
Mappings:
  ServiceMap:
    GraphQL-Ohio:
      ImageUrl: xxxxx.dkr.ecr.us-east-2.amazonaws.com/hasura/graphql-engine
      HealthCheckPath: /healthz
      HealthCheckSuccessCode: 200
      HealthCheckProtocol: HTTP
      VpcId: vpc-xxxxx
  LBTGMap:
    Parameters:
      HealthCheckEnabled: True
      HealthCheckIntervalSeconds: 30
      HealthCheckTimeoutSeconds: 5
      HealthyThresholdCount: 5
      UnhealthyThresholdCount: 2
      DeregistrationDelay: 300
      SlowStart: 0
      Stickiness: false
  SubnetMap: # There is technical debt here to keep this up to date as subnets change
    us-east-2:
      subnet0: subnet-111111
      subnet1: subnet-222222
      subnet2: subnet-333333
  SecurityGroupMap:
    us-east-2:
      sg0: sg-xxxxx
OK - I figured this out. I had my HealthCheckPort set to the string literal "traffic-port" instead of the actual port number. Duh.
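For anyone chasing the same "healthy via curl, gateway timeout via the ALB" symptom, the target group's own view of each target usually narrows it down faster than guessing. A small boto3 sketch (the target group ARN is a placeholder):

import boto3

elbv2 = boto3.client('elbv2')
resp = elbv2.describe_target_health(
    TargetGroupArn='arn:aws:elasticloadbalancing:us-east-2:111111111111:targetgroup/tg--test-graph/0123456789abcdef'
)

for desc in resp['TargetHealthDescriptions']:
    target = desc['Target']
    health = desc['TargetHealth']
    # Reason/Description report things like Target.Timeout or a mismatch
    # between the registered port and the health check port.
    print(target['Id'], target.get('Port'),
          health['State'], health.get('Reason'), health.get('Description'))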