Missing Authentication Token on Unauthenticated Method - aws-api-gateway

I have the following Terraform for setting up a CORS method for my API on API Gateway:
resource "aws_api_gateway_method" "default" {
rest_api_id = "${var.rest_api_id}"
resource_id = "${var.resource_id}"
http_method = "OPTIONS"
authorization = "NONE"
}
resource "aws_api_gateway_method_response" "default" {
rest_api_id = "${var.rest_api_id}"
resource_id = "${var.resource_id}"
http_method = "${aws_api_gateway_method.default.http_method}"
status_code = "200"
response_parameters = {
"method.response.header.Access-Control-Allow-Headers" = true,
"method.response.header.Access-Control-Allow-Methods" = true,
"method.response.header.Access-Control-Allow-Origin" = true,
}
}
resource "aws_api_gateway_integration" "default" {
rest_api_id = "${var.rest_api_id}"
resource_id = "${var.resource_id}"
http_method = "${aws_api_gateway_method.default.http_method}"
type = "MOCK"
passthrough_behavior = "WHEN_NO_MATCH"
request_templates {
"application/json" = "{ \"statusCode\": 200 }"
}
}
resource "aws_api_gateway_integration_response" "default" {
rest_api_id = "${var.rest_api_id}"
resource_id = "${var.resource_id}"
http_method = "${aws_api_gateway_method.default.http_method}"
status_code = "${aws_api_gateway_method_response.default.status_code}"
response_parameters = {
"method.response.header.Access-Control-Allow-Headers" = "'${join(",", var.allow_headers)}'",
"method.response.header.Access-Control-Allow-Methods" = "'${join(",", var.allow_methods)}'",
"method.response.header.Access-Control-Allow-Origin" = "'${var.allow_origin}'",
}
}
My variables are defined as:
variable "allow_headers" {
type = "list"
default = ["Content-Type", "X-Amz-Date", "Authorization", "X-Api-Key", "X-Amz-Security-Token", "X-Requested-With"]
}
variable "allow_methods" {
type = "list"
default = ["*"]
}
variable "allow_origin" {
default = "*"
}
variable "resource_id" {
description = "The API Gateway Resource id."
}
variable "rest_api_id" {
description = "The API Gateway REST API id."
}
When I use the API Gateway web console to test the endpoint, it works as expected. However, when I curl the endpoint, I get a 403:
$ curl -is -X OPTIONS https://api.naftuli.wtf/echo.json
HTTP/1.1 403 Forbidden
Content-Type: application/json
Content-Length: 42
Connection: keep-alive
Date: Fri, 23 Feb 2018 20:45:09 GMT
x-amzn-RequestId: 70089d6b-18da-11e8-9042-c3baac8eebde
x-amzn-ErrorType: MissingAuthenticationTokenException
X-Cache: Error from cloudfront
Via: 1.1 5a582ba7fbecfc5948507c13d8d2078a.cloudfront.net (CloudFront)
X-Amz-Cf-Id: VB2j87V6_wfSqXkyIPeqz8vjdDF5vBIi0DsJmIAn8kgyIjSAfkcf7A==
{"message":"Missing Authentication Token"}
The method is clearly configured with authorization = "NONE" and I can trigger it from the API Gateway console without issue.
How can I allow access to this method? I feel like I've done all that I can.

TL;DR: After every resource or method is added or changed, you must create a new deployment.
Terraform creates the deployment once and never updates it, because none of its own data changes. I have found a workaround for this:
resource "aws_api_gateway_stage" "default" {
stage_name = "production"
rest_api_id = "${aws_api_gateway_rest_api.default.id}"
deployment_id = "${aws_api_gateway_deployment.default.id}"
lifecycle {
# a new deployment needs to be created on every resource change so we do it outside of terraform
ignore_changes = ["deployment_id"]
}
}
I tell the stage to ignore the deployment_id property so that Terraform won't show changes where there aren't any.
In order to create a new deployment, I simply added this command to my Makefile deploy target:
deploy:
	terraform apply -auto-approve
	# use $$(...) so the shell, not make, performs the command substitution
	aws apigateway create-deployment \
		--rest-api-id $$(terraform output -json | jq -r .rest_api_id.value) \
		--stage-name $$(terraform output -json | jq -r .stage_name.value)
This creates a new deployment of my REST API for the given stage.
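For that pipeline to work, the configuration also has to expose the two values the Makefile reads. A minimal sketch of those outputs, assuming the resource names used above:
output "rest_api_id" {
  value = "${aws_api_gateway_rest_api.default.id}"
}

output "stage_name" {
  value = "${aws_api_gateway_stage.default.stage_name}"
}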
I am sure there are better ways to do this entirely in Terraform, but they elude me at the moment.

Here's a better way to do it:
resource "aws_api_gateway_deployment" "petshop" {
provider = "aws.default"
stage_description = "${md5(file("apigateway.tf"))}"
rest_api_id = "${aws_api_gateway_rest_api.petshop.id}"
stage_name = "prod"
}
This saves you from redeploying on every minor change; a new deployment is only triggered when the apigateway.tf file changes.
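On newer AWS provider releases, aws_api_gateway_deployment also supports a triggers argument for exactly this purpose. A rough sketch, assuming a recent provider version and the method/integration resources from the first answer:
resource "aws_api_gateway_deployment" "petshop" {
  rest_api_id = aws_api_gateway_rest_api.petshop.id

  # Force a new deployment whenever any of the listed resources change.
  triggers = {
    redeployment = sha1(jsonencode([
      aws_api_gateway_method.default.id,
      aws_api_gateway_integration.default.id,
    ]))
  }

  lifecycle {
    create_before_destroy = true
  }
}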

Related

Helm - Kubernetes cluster unreachable: the server has asked for the client to provide credentials

I'm trying to deploy a self-managed EKS cluster with Terraform. While I can deploy the cluster with addons, VPC, subnets and all other resources, it always fails at the Helm step:
Error: Kubernetes cluster unreachable: the server has asked for the client to provide credentials
with module.eks-ssp-kubernetes-addons.module.ingress_nginx[0].helm_release.nginx[0]
on .terraform/modules/eks-ssp-kubernetes-addons/modules/kubernetes-addons/ingress-nginx/main.tf line 19, in resource "helm_release" "nginx":
resource "helm_release" "nginx" {
This error repeats for metrics_server, lb_ingress, argocd, but cluster-autoscaler throws:
Warning: Helm release "cluster-autoscaler" was created but has a failed status.
with module.eks-ssp-kubernetes-addons.module.cluster_autoscaler[0].helm_release.cluster_autoscaler[0]
on .terraform/modules/eks-ssp-kubernetes-addons/modules/kubernetes-addons/cluster-autoscaler/main.tf line 1, in resource "helm_release" "cluster_autoscaler":
resource "helm_release" "cluster_autoscaler" {
My main.tf looks like this:
terraform {
  backend "remote" {}

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.66.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.7.1"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.4.1"
    }
  }
}
data "aws_eks_cluster" "cluster" {
name = module.eks-ssp.eks_cluster_id
}
data "aws_eks_cluster_auth" "cluster" {
name = module.eks-ssp.eks_cluster_id
}
provider "aws" {
access_key = "xxx"
secret_key = "xxx"
region = "xxx"
assume_role {
role_arn = "xxx"
}
}
provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.cluster.token
}
provider "helm" {
kubernetes {
host = data.aws_eks_cluster.cluster.endpoint
token = data.aws_eks_cluster_auth.cluster.token
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
}
}
My eks.tf looks like this:
module "eks-ssp" {
source = "github.com/aws-samples/aws-eks-accelerator-for-terraform"
# EKS CLUSTER
tenant = "DevOpsLabs2b"
environment = "dev-test"
zone = ""
terraform_version = "Terraform v1.1.4"
# EKS Cluster VPC and Subnet mandatory config
vpc_id = "xxx"
private_subnet_ids = ["xxx","xxx", "xxx", "xxx"]
# EKS CONTROL PLANE VARIABLES
create_eks = true
kubernetes_version = "1.19"
# EKS SELF MANAGED NODE GROUPS
self_managed_node_groups = {
self_mg = {
node_group_name = "DevOpsLabs2b"
subnet_ids = ["xxx","xxx", "xxx", "xxx"]
create_launch_template = true
launch_template_os = "bottlerocket" # amazonlinux2eks or bottlerocket or windows
custom_ami_id = "xxx"
public_ip = true # Enable only for public subnets
pre_userdata = <<-EOT
yum install -y amazon-ssm-agent \
systemctl enable amazon-ssm-agent && systemctl start amazon-ssm-agent \
EOT
disk_size = 10
instance_type = "t2.small"
desired_size = 2
max_size = 10
min_size = 0
capacity_type = "" # Optional Use this only for SPOT capacity as capacity_type = "spot"
k8s_labels = {
Environment = "dev-test"
Zone = ""
WorkerType = "SELF_MANAGED_ON_DEMAND"
}
additional_tags = {
ExtraTag = "t2x-on-demand"
Name = "t2x-on-demand"
subnet_type = "public"
}
create_worker_security_group = false # Creates a dedicated sec group for this Node Group
},
}
}
# The add-on settings below belong to a second module block; its name is taken from
# the error paths above (module "eks-ssp-kubernetes-addons") and the source path is
# an assumption based on the repository used for module "eks-ssp".
module "eks-ssp-kubernetes-addons" {
  source = "github.com/aws-samples/aws-eks-accelerator-for-terraform//modules/kubernetes-addons"

  enable_amazon_eks_vpc_cni = true
  amazon_eks_vpc_cni_config = {
    addon_name               = "vpc-cni"
    addon_version            = "v1.7.5-eksbuild.2"
    service_account          = "aws-node"
    resolve_conflicts        = "OVERWRITE"
    namespace                = "kube-system"
    additional_iam_policies  = []
    service_account_role_arn = ""
    tags                     = {}
  }

  enable_amazon_eks_kube_proxy = true
  amazon_eks_kube_proxy_config = {
    addon_name               = "kube-proxy"
    addon_version            = "v1.19.8-eksbuild.1"
    service_account          = "kube-proxy"
    resolve_conflicts        = "OVERWRITE"
    namespace                = "kube-system"
    additional_iam_policies  = []
    service_account_role_arn = ""
    tags                     = {}
  }

  # K8s Add-ons
  enable_aws_load_balancer_controller = true
  enable_metrics_server               = true
  enable_cluster_autoscaler           = true
  enable_aws_for_fluentbit            = true
  enable_argocd                       = true
  enable_ingress_nginx                = true

  depends_on = [module.eks-ssp.self_managed_node_groups]
}
The OP confirmed in the comments that the problem was resolved:
Of course. I think I found the issue. Doing "kubectl get svc" throws: "An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:iam::xxx:user/terraform_deploy is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::xxx:user/terraform_deploy"
Solved it by using my actual role, that's crazy. No idea why it was calling itself.
For a similar problem, see also this issue.
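If the token from aws_eks_cluster_auth turns out to be the problem (it is read during planning and can expire or be issued for the wrong identity), a common workaround is to have the kubernetes and helm providers fetch a fresh token at apply time with an exec block. A sketch, assuming the AWS CLI is on the PATH and the cluster name output used above:
provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)

  # Fetch a short-lived token at apply time instead of reusing a possibly stale one.
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks-ssp.eks_cluster_id]
  }
}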
I solved this error by adding dependencies to the Helm installations.
The depends_on argument waits for the referenced step to complete successfully, and only then does the Helm module run.
module "nginx-ingress" {
depends_on = [module.eks, module.aws-load-balancer-controller]
source = "terraform-module/release/helm"
...}
module "aws-load-balancer-controller" {
depends_on = [module.eks]
source = "terraform-module/release/helm"
...}
module "helm_autoscaler" {
depends_on = [module.eks]
source = "terraform-module/release/helm"
...}

Add secret to freshly created Azure AKS using Terraform Kubernetes provider fails

I am creating a Kubernetes cluster with the Azure Terraform provider and trying to add a secret to it. The cluster gets created fine, but I am getting errors authenticating to the cluster when creating the secret. I tried two different Terraform Kubernetes provider configurations. Here is the main configuration:
variable "client_id" {}
variable "client_secret" {}
resource "azurerm_resource_group" "rg-example" {
name = "rg-example"
location = "East US"
}
resource "azurerm_kubernetes_cluster" "k8s-example" {
name = "k8s-example"
location = azurerm_resource_group.rg-example.location
resource_group_name = azurerm_resource_group.rg-example.name
dns_prefix = "k8s-example"
default_node_pool {
name = "default"
node_count = 1
vm_size = "Standard_B2s"
}
service_principal {
client_id = var.client_id
client_secret = var.client_secret
}
role_based_access_control {
enabled = true
}
}
resource "kubernetes_secret" "secret_example" {
metadata {
name = "mysecret"
}
data = {
"something" = "super secret"
}
depends_on = [
azurerm_kubernetes_cluster.k8s-example
]
}
provider "azurerm" {
version = "=2.29.0"
features {}
}
output "host" {
value = azurerm_kubernetes_cluster.k8s-example.kube_config.0.host
}
output "cluster_username" {
value = azurerm_kubernetes_cluster.k8s-example.kube_config.0.username
}
output "cluster_password" {
value = azurerm_kubernetes_cluster.k8s-example.kube_config.0.password
}
output "client_key" {
value = azurerm_kubernetes_cluster.k8s-example.kube_config.0.client_key
}
output "client_certificate" {
value = azurerm_kubernetes_cluster.k8s-example.kube_config.0.client_certificate
}
output "cluster_ca_certificate" {
value = azurerm_kubernetes_cluster.k8s-example.kube_config.0.cluster_ca_certificate
}
Here is the first Kubernetes provider configuration using certificates:
provider "kubernetes" {
version = "=1.13.2"
load_config_file = "false"
host = azurerm_kubernetes_cluster.k8s-example.kube_config.0.host
client_certificate = azurerm_kubernetes_cluster.k8s-example.kube_config.0.client_certificate
client_key = azurerm_kubernetes_cluster.k8s-example.kube_config.0.client_key
cluster_ca_certificate = azurerm_kubernetes_cluster.k8s-example.kube_config.0.cluster_ca_certificate
}
And the error I'm receiving:
kubernetes_secret.secret_example: Creating...
Error: Failed to configure client: tls: failed to find any PEM data in certificate input
Here is the second Kubernetes provider configuration using HTTP Basic Authorization:
provider "kubernetes" {
version = "=1.13.2"
load_config_file = "false"
host = azurerm_kubernetes_cluster.k8s-example.kube_config.0.host
username = azurerm_kubernetes_cluster.k8s-example.kube_config.0.username
password = azurerm_kubernetes_cluster.k8s-example.kube_config.0.password
}
And the error I'm receiving:
kubernetes_secret.secret_example: Creating...
Error: Post "https://k8s-example-c4a78c03.hcp.eastus.azmk8s.io:443/api/v1/namespaces/default/secrets": x509: certificate signed by unknown authority
ANALYSIS
I checked the outputs of azurerm_kubernetes_cluster.k8s-example and the data seems valid (username, password, host, etc.). Maybe I need an SSL certificate on my Kubernetes cluster, but I'm not certain, as I'm new to this. Can someone help me out?
According to this issue in hashicorp/terraform-provider-kubernetes, you need to use base64decode(). The example that the author used:
provider "kubernetes" {
host = "${google_container_cluster.k8sexample.endpoint}"
username = "${var.master_username}"
password = "${var.master_password}"
client_certificate = "${base64decode(google_container_cluster.k8sexample.master_auth.0.client_certificate)}"
client_key = "${base64decode(google_container_cluster.k8sexample.master_auth.0.client_key)}"
cluster_ca_certificate = "${base64decode(google_container_cluster.k8sexample.master_auth.0.cluster_ca_certificate)}"
}
That author said they got the same error as you if they left out the base64decode. You can read more about that function here: https://www.terraform.io/docs/configuration/functions/base64decode.html
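Applied to the AKS provider configuration from the question, that would look roughly like this (a sketch; the attribute paths are the ones shown in the question's outputs):
provider "kubernetes" {
  version          = "=1.13.2"
  load_config_file = "false"
  host             = azurerm_kubernetes_cluster.k8s-example.kube_config.0.host

  # kube_config returns these values base64-encoded, so decode them first.
  client_certificate     = base64decode(azurerm_kubernetes_cluster.k8s-example.kube_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.k8s-example.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.k8s-example.kube_config.0.cluster_ca_certificate)
}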

Create kubernetes secret for docker registry - Terraform

Using kubectl, we can create a docker registry authentication secret as follows:
kubectl create secret docker-registry regsecret \
  --docker-server=docker.example.com \
  --docker-username=kube \
  --docker-password=PW_STRING \
  --docker-email=my@email.com
How do I create this secret using Terraform? I saw this link; it has data. In my Terraform flow, the Kubernetes instance is created in Azure, I get the required data from there, and I created something like the below:
resource "kubernetes_secret" "docker-registry" {
metadata {
name = "registry-credentials"
}
data = {
docker-server = data.azurerm_container_registry.docker_registry_data.login_server
docker-username = data.azurerm_container_registry.docker_registry_data.admin_username
docker-password = data.azurerm_container_registry.docker_registry_data.admin_password
}
}
It seems that this is wrong, as the images are not being pulled. What am I missing here?
If you run the following command:
kubectl create secret docker-registry regsecret \
  --docker-server=docker.example.com \
  --docker-username=kube \
  --docker-password=PW_STRING \
  --docker-email=my@email.com
it will create a secret like the following:
$ kubectl get secrets regsecret -o yaml
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJkb2NrZXIuZXhhbXBsZS5jb20iOnsidXNlcm5hbWUiOiJrdWJlIiwicGFzc3dvcmQiOiJQV19TVFJJTkciLCJlbWFpbCI6Im15QGVtYWlsLmNvbSIsImF1dGgiOiJhM1ZpWlRwUVYxOVRWRkpKVGtjPSJ9fX0=
kind: Secret
metadata:
  creationTimestamp: "2020-06-01T18:31:07Z"
  name: regsecret
  namespace: default
  resourceVersion: "42304"
  selfLink: /api/v1/namespaces/default/secrets/regsecret
  uid: 59054483-2789-4dd2-9321-74d911eef610
type: kubernetes.io/dockerconfigjson
If we decode .dockerconfigjson, we get:
{"auths":{"docker.example.com":{"username":"kube","password":"PW_STRING","email":"my@email.com","auth":"a3ViZTpQV19TVFJJTkc="}}}
So, how can we do that using terraform?
I created a file config.json with the following data:
{"auths":{"${docker-server}":{"username":"${docker-username}","password":"${docker-password}","email":"${docker-email}","auth":"${auth}"}}}
Then in the main.tf file:
resource "kubernetes_secret" "docker-registry" {
metadata {
name = "regsecret"
}
data = {
".dockerconfigjson" = "${data.template_file.docker_config_script.rendered}"
}
type = "kubernetes.io/dockerconfigjson"
}
data "template_file" "docker_config_script" {
template = "${file("${path.module}/config.json")}"
vars = {
docker-username = "${var.docker-username}"
docker-password = "${var.docker-password}"
docker-server = "${var.docker-server}"
docker-email = "${var.docker-email}"
auth = base64encode("${var.docker-username}:${var.docker-password}")
}
}
then run
$ terraform apply
This will generate the same secret. Hope it helps.
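On Terraform 0.12 and later, you can also build the same document inline with jsonencode instead of a template file. A sketch under that assumption, reusing the variables above:
resource "kubernetes_secret" "docker-registry" {
  metadata {
    name = "regsecret"
  }

  type = "kubernetes.io/dockerconfigjson"

  data = {
    ".dockerconfigjson" = jsonencode({
      auths = {
        (var.docker-server) = {
          username = var.docker-username
          password = var.docker-password
          email    = var.docker-email
          auth     = base64encode("${var.docker-username}:${var.docker-password}")
        }
      }
    })
  }
}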
I would suggest creating an azurerm_role_assignment to give AKS access to the ACR:
resource "azurerm_role_assignment" "aks_sp_acr" {
scope = azurerm_container_registry.acr.id
role_definition_name = "AcrPull"
principal_id = var.service_principal_obj_id
depends_on = [
azurerm_kubernetes_cluster.aks,
azurerm_container_registry.acr
]
}
Update
You can create the service principal in the Azure portal or with the az CLI and use the client_id, client_secret and object id in Terraform.
Get Client_id and Object_id by running az ad sp list --filter "displayName eq '<name>'". The secret has to be created in the Certificates & secrets tab of the service principal. See this guide: https://pixelrobots.co.uk/2018/11/first-look-at-terraform-and-the-azure-cloud-shell/
Just set all three as variables, e.g. for the object id:
variable "service_principal_obj_id" {
default = "<object-id>"
}
Now use the credentials with aks:
resource "azurerm_kubernetes_cluster" "aks" {
...
service_principal {
client_id = var.service_principal_app_id
client_secret = var.service_principal_password
}
...
}
And set the object id in the acr as described above.
Alternative
You can create the service principal with Terraform (this only works if you have the necessary permissions): https://www.terraform.io/docs/providers/azuread/r/service_principal.html, combined with a random_password resource:
resource "azuread_application" "aks_sp" {
name = "somename"
available_to_other_tenants = false
oauth2_allow_implicit_flow = false
}
resource "azuread_service_principal" "aks_sp" {
application_id = azuread_application.aks_sp.application_id
depends_on = [
azuread_application.aks_sp
]
}
resource "azuread_service_principal_password" "aks_sp_pwd" {
service_principal_id = azuread_service_principal.aks_sp.id
value = random_password.aks_sp_pwd.result
end_date = "2099-01-01T01:02:03Z"
depends_on = [
azuread_service_principal.aks_sp
]
}
You need to assign the "Contributor" role to the service principal; then you can use it directly in AKS / ACR.
resource "azurerm_role_assignment" "aks_sp_role_assignment" {
scope = var.subscription_id
role_definition_name = "Contributor"
principal_id = azuread_service_principal.aks_sp.id
depends_on = [
azuread_service_principal_password.aks_sp_pwd
]
}
Use them with aks:
resource "azurerm_kubernetes_cluster" "aks" {
...
service_principal {
client_id = azuread_service_principal.aks_sp.app_id
client_secret = azuread_service_principal_password.aks_sp_pwd.value
}
...
}
and the role assignment:
resource "azurerm_role_assignment" "aks_sp_acr" {
scope = azurerm_container_registry.acr.id
role_definition_name = "AcrPull"
principal_id = azuread_service_principal.aks_sp.object_id
depends_on = [
azurerm_kubernetes_cluster.aks,
azurerm_container_registry.acr
]
}
Update secret example
resource "random_password" "aks_sp_pwd" {
length = 32
special = true
}

Why error, "alias target name does not lie within the target zone" in Terraform aws_route53_record?

With Terraform 0.12, I am creating a static web site in an S3 bucket:
...
resource "aws_s3_bucket" "www" {
bucket = "example.com"
acl = "public-read"
policy = <<-POLICY
{
"Version": "2012-10-17",
"Statement": [{
"Sid": "AddPerm",
"Effect": "Allow",
"Principal": "*",
"Action": ["s3:GetObject"],
"Resource": ["arn:aws:s3:::example.com/*"]
}]
}
POLICY
website {
index_document = "index.html"
error_document = "404.html"
}
tags = {
Environment = var.environment
Terraform = "true"
}
}
resource "aws_route53_zone" "main" {
name = "example.com"
tags = {
Environment = var.environment
Terraform = "true"
}
}
resource "aws_route53_record" "main-ns" {
zone_id = aws_route53_zone.main.zone_id
name = "example.com"
type = "A"
alias {
name = aws_s3_bucket.www.website_endpoint
zone_id = aws_route53_zone.main.zone_id
evaluate_target_health = false
}
}
I get the error:
Error: [ERR]: Error building changeset: InvalidChangeBatch:
[Tried to create an alias that targets example.com.s3-website-us-west-2.amazonaws.com., type A in zone Z1P...9HY, but the alias target name does not lie within the target zone,
Tried to create an alias that targets example.com.s3-website-us-west-2.amazonaws.com., type A in zone Z1P...9HY, but that target was not found]
status code: 400, request id: 35...bc
on main.tf line 132, in resource "aws_route53_record" "main-ns":
132: resource "aws_route53_record" "main-ns" {
What is wrong?
The zone_id inside the alias block must be the S3 bucket's hosted zone ID, not the Route 53 zone ID. The corrected aws_route53_record resource is:
resource "aws_route53_record" "main-ns" {
zone_id = aws_route53_zone.main.zone_id
name = "example.com"
type = "A"
alias {
name = aws_s3_bucket.www.website_endpoint
zone_id = aws_s3_bucket.www.hosted_zone_id # Corrected
evaluate_target_health = false
}
}
Here is an example for CloudFront. The variables are:
base_url = example.com
cloudfront_distribution = "EXXREDACTEDXXX"
domain_names = ["example.com", "www.example.com"]
The Terraform code is:
data "aws_route53_zone" "this" {
name = var.base_url
}
data "aws_cloudfront_distribution" "this" {
id = var.cloudfront_distribution
}
resource "aws_route53_record" "this" {
for_each = toset(var.domain_names)
zone_id = data.aws_route53_zone.this.zone_id
name = each.value
type = "A"
alias {
name = data.aws_cloudfront_distribution.this.domain_name
zone_id = data.aws_cloudfront_distribution.this.hosted_zone_id
evaluate_target_health = false
}
}
Many users specify CloudFront zone_id = "Z2FDTNDATAQYW2" because it's always Z2FDTNDATAQYW2...until some day maybe it isn't. I like to avoid the literal string by computing it using data source aws_cloudfront_distribution.
For anyone who, like me, came here from Google hoping to find the syntax for CloudFormation in YAML, here is how you can achieve it for your subdomains.
Here we add a DNS record to Route 53 and point all subdomains of example.com to this ALB:
AlbDnsRecord:
  Type: "AWS::Route53::RecordSet"
  DependsOn: [ALB_LOGICAL_ID]
  Properties:
    HostedZoneName: "example.com."
    Type: "A"
    Name: "*.example.com."
    AliasTarget:
      DNSName: !GetAtt [ALB_LOGICAL_ID].DNSName
      EvaluateTargetHealth: False
      HostedZoneId: !GetAtt [ALB_LOGICAL_ID].CanonicalHostedZoneID
    Comment: "A record for Stages ALB"
My mistakes were:
not adding . at the end of my HostedZoneName
under AliasTarget.HostedZoneId, the "ID" is all uppercase at the end of CanonicalHostedZoneID
also, replace [ALB_LOGICAL_ID] with the actual logical name of your ALB; for me it was ALBStages, as in ALBStages.DNSName
You should have the zone in your Route53.
So for us all the below addresses will come to this ALB:
dev01.example.com
dev01api.example.com
dev02.example.com
dev02api.example.com
qa01.example.com
qa01api.example.com
qa02.example.com
qa02api.example.com
uat.example.com
uatapi.example.com
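Since the question itself is in Terraform, the equivalent wildcard alias record would look roughly like this (a sketch; the aws_lb.stages and data.aws_route53_zone.main names are illustrative):
resource "aws_route53_record" "stages" {
  zone_id = data.aws_route53_zone.main.zone_id
  name    = "*.example.com"
  type    = "A"

  alias {
    name                   = aws_lb.stages.dns_name
    zone_id                = aws_lb.stages.zone_id # the ALB's own hosted zone, not the Route 53 zone
    evaluate_target_health = false
  }
}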

How do I connect a specific path on an AWS API Gateway to a specific ECS instance via Terraform?

I'm running an application in AWS ECS successfully behind a network load balancer, and able to access it via the internet. I'm trying to put it behind an API Gateway (with the eventual goal of making the VPC private so services can ONLY be accessed via the API Gateway). The thing I'm having trouble with is the best way to specify the service in the aws_api_gateway_integration configuration.
I can access the service successfully via the NLB. I've tried putting the URI into the aws_api_gateway_integration but it doesn't work for access, and probably isn't the best way to specify it since the DNS name will change. I'm unable to find any examples of API Gateway with ECS.
ECS Service Definition:
resource "aws_ecs_service" "cocktail-service-service" {
name = "${var.prefix}-cocktail-service-service"
cluster = "${aws_ecs_cluster.ecs-cluster.id}"
task_definition = "${aws_ecs_task_definition.cocktail-service.family}:${max("${aws_ecs_task_definition.cocktail-service.revision}", "${data.aws_ecs_task_definition.cocktail-service.revision}")}"
desired_count = 1
network_configuration {
security_groups = ["${aws_security_group.ecs-public-sg.id}"]
subnets = ["${aws_subnet.dev-vpc-subnet-1.id}", "${aws_subnet.dev-vpc-subnet-2.id}"]
}
load_balancer {
target_group_arn = "${aws_lb_target_group.ecs-target-group.arn}"
container_port = 8080
container_name = "cocktail-service"
}
}
API Gateway definitions:
resource "aws_api_gateway_resource" "cocktail-service-resource" {
rest_api_id = "${aws_api_gateway_rest_api.dev-api-gateway.id}"
parent_id = "${aws_api_gateway_rest_api.dev-api-gateway.root_resource_id}"
# path_part appears to be how to trigger catching the request
path_part = "cocktails"
}
resource "aws_api_gateway_method" "cocktail-service-method" {
rest_api_id = "${aws_api_gateway_rest_api.dev-api-gateway.id}"
resource_id = "${aws_api_gateway_resource.cocktail-service-resource.id}"
http_method = "GET"
authorization = "NONE"
}
resource "aws_api_gateway_integration" "cocktail-service-integration" {
rest_api_id = "${aws_api_gateway_rest_api.dev-api-gateway.id}"
resource_id = "${aws_api_gateway_resource.cocktail-service-resource.id}"
http_method = "${aws_api_gateway_method.cocktail-service-method.http_method}"
type = "HTTP"
uri = "http://load-balancer-name-goes-here.elb.us-east-2.amazonaws.com/cocktails"
integration_http_method = "GET"
# TODO: change connection_type to VPC_LINK once the VPC is private
connection_type = "INTERNET"
#connection_id = "${aws_api_gateway_vpc_link.test.id}"
}
When I try to test this through the API Gateway console Test tool, I get a 500 internal server error.
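For reference, the aws_api_gateway_vpc_link behind the commented-out connection_id would look roughly like this once the integration switches to VPC_LINK (a sketch; the name and the aws_lb.nlb reference are illustrative, and the target must be a network load balancer):
resource "aws_api_gateway_vpc_link" "test" {
  name        = "cocktail-service-vpc-link"
  target_arns = ["${aws_lb.nlb.arn}"]
}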