How to create an ECS task in CloudFormation before the CodePipeline is created
I'm trying to define my ECS stack in CloudFormation, including the CI/CD pipeline and the ECR repository. However, you run into a bit of a conundrum:
1. To create an ECS task definition (AWS::ECS::TaskDefinition) you have to first create a populated ECR repository (AWS::ECR::Repository) so that you can specify the Image property.
2. To populate this repository you have to first create the CodePipeline (AWS::CodePipeline::Pipeline), which will run automatically on creation.
3. To create the pipeline you have to first create the ECS task definition / cluster, as the pipeline needs to deploy onto it (back to step 1).
The solutions I can see are:
- Don't create the ECR repository in CloudFormation; pass it as a parameter to the stacks.
- Define a dummy image in the task definition to deploy the first time, then create the pipeline, which will create the real ECR repository and deploy the real image.
- Create the CodeBuild project and ECR repository in a separate stack, trigger the CodeBuild project with a Lambda function (I don't think it runs automatically on creation like the pipeline does), create the ECS cluster, and then create the pipeline. This seems more complicated than it should be.
Are there any better ways of approaching this problem?
The way I do it is to create the ECR repository first, but still using CloudFormation. So I have two templates: one for the ECR repo, and a second one for the rest. The ECR repo is passed as a parameter to the second template, but you can also export its URI and consume it with !ImportValue in the second step. The URI is created as follows:
Uri:
  Value: !Sub "${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${MyECR}"
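For example, the export/import variant might look like this (the export name EcrUri is just an example, not something the original templates define):

```yaml
# First template: export the URI alongside the ECR repo
Outputs:
  Uri:
    Value: !Sub "${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${MyECR}"
    Export:
      Name: EcrUri

# Second template: import it where the image URI is needed, e.g.
#   Image: !Sub ['${Uri}:latest', { Uri: !ImportValue EcrUri }]
```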
You will also need some initial image in the repo for the task definition. You can automate this by having a separate CodeBuild project (no need for CodePipeline) for this initial build.
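A sketch of such a one-off seeder project is below. The logical ID InitialBuild, the busybox placeholder image, and the MyECR parameter are assumptions, not part of the original templates; you would run it once with `aws codebuild start-build --project-name initial-image-build`:

```yaml
InitialBuild:
  Type: AWS::CodeBuild::Project
  Properties:
    Name: initial-image-build
    ServiceRole: !GetAtt CodeBuildServiceRole.Arn
    Artifacts:
      Type: NO_ARTIFACTS
    Source:
      Type: NO_SOURCE
      BuildSpec: !Sub |
        version: 0.2
        phases:
          build:
            commands:
              # log in to ECR, then push any placeholder image so the
              # task definition's Image property resolves
              - aws ecr get-login-password | docker login --username AWS --password-stdin ${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com
              - docker pull public.ecr.aws/docker/library/busybox:latest
              - docker tag public.ecr.aws/docker/library/busybox:latest ${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${MyECR}:latest
              - docker push ${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${MyECR}:latest
    Environment:
      ComputeType: BUILD_GENERAL1_SMALL
      Image: aws/codebuild/standard:6.0
      Type: LINUX_CONTAINER
      PrivilegedMode: true
```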
Another way, using a single stack, is to trigger the Fargate deployment after the image is pushed: make an initial commit to the CodeCommit repository and set the DesiredCount property of the ECS service to zero (once the pipeline has pushed the first image, you can raise the count, e.g. with `aws ecs update-service --desired-count 1`):
Repo:
  Type: AWS::CodeCommit::Repository
  Properties:
    Code:
      BranchName: main
      S3:
        Bucket: some-bucket
        Key: code.zip
    RepositoryName: !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref AWS::StackId]]]]
    RepositoryDescription: Repository
    Triggers:
      - Name: Trigger
        CustomData: The Code Repository
        DestinationArn: !Ref Topic
        Branches:
          - main
        Events: [all]
Service:
  Type: AWS::ECS::Service
  Properties:
    Cluster: !Ref Cluster
    DesiredCount: 0
    LaunchType: FARGATE
    NetworkConfiguration:
      AwsvpcConfiguration:
        SecurityGroups:
          - !Ref SecG
        Subnets: !Ref Subs
    ServiceName: !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref AWS::StackId]]]]
    TaskDefinition: !Ref TaskDefinition
Build:
  Type: AWS::CodeBuild::Project
  Properties:
    Artifacts:
      Type: CODEPIPELINE
    Source:
      Type: CODEPIPELINE
      BuildSpec: !Sub |
        version: 0.2
        phases:
          pre_build:
            commands:
              - echo "[`date`] PRE_BUILD"
              - echo "Logging in to Amazon ECR..."
              - aws ecr get-login-password --region $REGION | docker login --username AWS --password-stdin $ACCOUNT.dkr.ecr.$REGION.amazonaws.com
              - IMAGE_URI="$ACCOUNT.dkr.ecr.$REGION.amazonaws.com/$REPO:$TAG"
          build:
            commands:
              - echo "[`date`] BUILD"
              - echo "Building Docker Image..."
              - docker build -t $REPO:$TAG .
              - docker tag $REPO:$TAG $IMAGE_URI
          post_build:
            commands:
              - echo "[`date`] POST_BUILD"
              - echo "Pushing Docker Image..."
              - docker push $IMAGE_URI
              - echo Writing image definitions file...
              - printf '[{"name":"svc","imageUri":"%s"}]' $IMAGE_URI > $FILE
        artifacts:
          files: $FILE
    Environment:
      ComputeType: BUILD_GENERAL1_SMALL
      Image: aws/codebuild/standard:6.0
      Type: LINUX_CONTAINER
      EnvironmentVariables:
        - Name: REGION
          Type: PLAINTEXT
          Value: !Ref AWS::Region
        - Name: ACCOUNT
          Type: PLAINTEXT
          Value: !Ref AWS::AccountId
        - Name: TAG
          Type: PLAINTEXT
          Value: latest
        - Name: REPO
          Type: PLAINTEXT
          Value: !Ref Registry
        - Name: FILE
          Type: PLAINTEXT
          Value: !Ref ImagesFile
      PrivilegedMode: true
    Name: !Ref AWS::StackName
    ServiceRole: !GetAtt CodeBuildServiceRole.Arn
Pipeline:
  Type: AWS::CodePipeline::Pipeline
  Properties:
    RoleArn: !GetAtt CodePipelineServiceRole.Arn
    ArtifactStore:
      Type: S3
      Location: !Ref ArtifactBucket
    Stages:
      - Name: Source
        Actions:
          - Name: Site
            ActionTypeId:
              Category: Source
              Owner: AWS
              Version: '1'
              Provider: CodeCommit
            Configuration:
              RepositoryName: !GetAtt Repo.Name
              BranchName: main
              PollForSourceChanges: 'false'
            InputArtifacts: []
            OutputArtifacts:
              - Name: SourceArtifact
            RunOrder: 1
      - Name: Build
        Actions:
          - Name: Docker
            ActionTypeId:
              Category: Build
              Owner: AWS
              Version: '1'
              Provider: CodeBuild
            Configuration:
              ProjectName: !Ref Build
            InputArtifacts:
              - Name: SourceArtifact
            OutputArtifacts:
              - Name: BuildArtifact
            RunOrder: 1
      - Name: Deploy
        Actions:
          - Name: Fargate
            ActionTypeId:
              Category: Deploy
              Owner: AWS
              Version: '1'
              Provider: ECS
            Configuration:
              ClusterName: !Ref Cluster
              FileName: !Ref ImagesFile
              ServiceName: !GetAtt Service.Name
            InputArtifacts:
              - Name: BuildArtifact
            RunOrder: 1
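For reference, the image definitions file that the printf command in the buildspec writes, and that the ECS deploy action consumes via FileName, looks like this (account ID, region, and repository name are placeholders):

```json
[
  {
    "name": "svc",
    "imageUri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/myrepo:latest"
  }
]
```

Note that "name" must match the container name in the task definition ("svc" here).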
Note that the some-bucket S3 bucket needs to contain the zipped Dockerfile and any source code, without any .git directory included.
If you use another service for your repo, GitHub for instance, or you already have a repo, simply remove the Repo section and configure the pipeline's source stage as required.
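The !Select/!Split expressions used above for RepositoryName, ServiceName, and ClusterName derive a pseudo-unique name by taking the last dash-separated segment of the stack ID's GUID. A quick Python sketch with a made-up stack ID shows what they evaluate to:

```python
# Simulate !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref AWS::StackId]]]]
# with a made-up stack ID (the real one is assigned by CloudFormation).
stack_id = ("arn:aws:cloudformation:us-east-1:123456789012:"
            "stack/my-stack/1a2b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d")
guid = stack_id.split("/")[2]   # the stack GUID after the second slash
name = guid.split("-")[4]       # fifth dash-separated segment of the GUID
print(name)  # 1e2f3a4b5c6d
```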
The entire CloudFormation stack is listed below for reference:
AWSTemplateFormatVersion: '2010-09-09'
Description: CloudFormation Stack to Trigger CodeBuild via CodePipeline
Parameters:
  SecG:
    Description: Single security group
    Type: AWS::EC2::SecurityGroup::Id
  Subs:
    Description: Comma separated subnet IDs
    Type: List<AWS::EC2::Subnet::Id>
  ImagesFile:
    Type: String
    Default: images.json
Resources:
  ArtifactBucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    Properties:
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      Tags:
        - Key: UseWithCodeDeploy
          Value: true
  CodeBuildServiceRole:
    Type: AWS::IAM::Role
    Properties:
      Path: /
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service: codebuild.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: !Sub 'ssm-${AWS::Region}-${AWS::StackName}'
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - ssm:GetParameters
                  - secretsmanager:GetSecretValue
                Resource: '*'
        - PolicyName: !Sub 'logs-${AWS::Region}-${AWS::StackName}'
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - logs:CreateLogGroup
                  - logs:CreateLogStream
                  - logs:PutLogEvents
                Resource: '*'
        - PolicyName: !Sub 'ecr-${AWS::Region}-${AWS::StackName}'
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - ecr:BatchCheckLayerAvailability
                  - ecr:CompleteLayerUpload
                  - ecr:GetAuthorizationToken
                  - ecr:InitiateLayerUpload
                  - ecr:PutImage
                  - ecr:UploadLayerPart
                  - lightsail:*
                Resource: '*'
        - PolicyName: !Sub bkt-${ArtifactBucket}-${AWS::Region}
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - s3:ListBucket
                  - s3:GetBucketLocation
                  - s3:ListBucketVersions
                  - s3:GetBucketVersioning
                Resource:
                  - !Sub arn:aws:s3:::${ArtifactBucket}
                  - arn:aws:s3:::some-bucket
        - PolicyName: !Sub obj-${ArtifactBucket}-${AWS::Region}
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - s3:GetObject
                  - s3:PutObject
                  - s3:GetObjectAcl
                  - s3:PutObjectAcl
                  - s3:GetObjectTagging
                  - s3:PutObjectTagging
                  - s3:GetObjectVersion
                  - s3:GetObjectVersionAcl
                  - s3:PutObjectVersionAcl
                Resource:
                  - !Sub arn:aws:s3:::${ArtifactBucket}/*
                  - arn:aws:s3:::some-bucket/*
  CodeDeployServiceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Sid: '1'
            Effect: Allow
            Principal:
              Service:
                - codedeploy.us-east-1.amazonaws.com
                - codedeploy.eu-west-1.amazonaws.com
            Action: sts:AssumeRole
      Path: /
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AWSCodeDeployRoleForECS
        - arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole
        - arn:aws:iam::aws:policy/service-role/AWSCodeDeployRoleForLambda
  CodeDeployRolePolicies:
    Type: AWS::IAM::Policy
    Properties:
      PolicyName: !Sub 'CDPolicy-${AWS::Region}-${AWS::StackName}'
      PolicyDocument:
        Statement:
          - Effect: Allow
            Resource:
              - '*'
            Action:
              - ec2:Describe*
          - Effect: Allow
            Resource:
              - '*'
            Action:
              - autoscaling:CompleteLifecycleAction
              - autoscaling:DeleteLifecycleHook
              - autoscaling:DescribeLifecycleHooks
              - autoscaling:DescribeAutoScalingGroups
              - autoscaling:PutLifecycleHook
              - autoscaling:RecordLifecycleActionHeartbeat
      Roles:
        - !Ref CodeDeployServiceRole
  CodePipelineServiceRole:
    Type: AWS::IAM::Role
    Properties:
      Path: /
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: codepipeline.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: !Sub 'root-${AWS::Region}-${AWS::StackName}'
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Resource:
                  - !Sub 'arn:aws:s3:::${ArtifactBucket}/*'
                  - !Sub 'arn:aws:s3:::${ArtifactBucket}'
                Effect: Allow
                Action:
                  - s3:PutObject
                  - s3:GetObject
                  - s3:GetObjectVersion
                  - s3:GetBucketAcl
                  - s3:GetBucketLocation
              - Resource: "*"
                Effect: Allow
                Action:
                  - ecs:*
              - Resource: "*"
                Effect: Allow
                Action:
                  - iam:PassRole
                Condition:
                  StringLike:
                    iam:PassedToService:
                      - ecs-tasks.amazonaws.com
              - Resource: !GetAtt Build.Arn
                Effect: Allow
                Action:
                  - codebuild:BatchGetBuilds
                  - codebuild:StartBuild
                  - codebuild:BatchGetBuildBatches
                  - codebuild:StartBuildBatch
              - Resource: !GetAtt Repo.Arn
                Effect: Allow
                Action:
                  - codecommit:CancelUploadArchive
                  - codecommit:GetBranch
                  - codecommit:GetCommit
                  - codecommit:GetRepository
                  - codecommit:GetUploadArchiveStatus
                  - codecommit:UploadArchive
  AmazonCloudWatchEventRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - events.amazonaws.com
            Action: sts:AssumeRole
      Path: /
      Policies:
        - PolicyName: cwe-pipeline-execution
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action: codepipeline:StartPipelineExecution
                Resource: !Sub arn:aws:codepipeline:${AWS::Region}:${AWS::AccountId}:${Pipeline}
  AmazonCloudWatchEventRule:
    Type: AWS::Events::Rule
    Properties:
      EventPattern:
        source:
          - aws.codecommit
        detail-type:
          - CodeCommit Repository State Change
        resources:
          - !GetAtt Repo.Arn
        detail:
          event:
            - referenceCreated
            - referenceUpdated
          referenceType:
            - branch
          referenceName:
            - main
      Targets:
        - Arn: !Sub arn:aws:codepipeline:${AWS::Region}:${AWS::AccountId}:${Pipeline}
          RoleArn: !GetAtt AmazonCloudWatchEventRole.Arn
          Id: codepipeline-Pipeline
  Topic:
    Type: AWS::SNS::Topic
    Properties:
      Subscription:
        - Endpoint: user@example.com
          Protocol: email
  TopicPolicy:
    Type: AWS::SNS::TopicPolicy
    Properties:
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: AllowPublish
            Effect: Allow
            Principal:
              Service:
                - 'codestar-notifications.amazonaws.com'
            Action:
              - 'SNS:Publish'
            Resource:
              - !Ref Topic
      Topics:
        - !Ref Topic
  Repo:
    Type: AWS::CodeCommit::Repository
    Properties:
      Code:
        BranchName: main
        S3:
          Bucket: some-bucket
          Key: code.zip
      RepositoryName: !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref AWS::StackId]]]]
      RepositoryDescription: Repository
      Triggers:
        - Name: Trigger
          CustomData: The Code Repository
          DestinationArn: !Ref Topic
          Branches:
            - main
          Events: [all]
  RepoUser:
    Type: AWS::IAM::User
    Properties:
      Path: '/'
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AWSCodeCommitPowerUser
  RepoUserKey:
    Type: AWS::IAM::AccessKey
    Properties:
      UserName: !Ref RepoUser
  Registry:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref AWS::StackId]]]]
      RepositoryPolicyText:
        Version: '2012-10-17'
        Statement:
          - Sid: AllowPushPull
            Effect: Allow
            Principal:
              AWS:
                - !GetAtt CodeDeployServiceRole.Arn
            Action:
              - ecr:GetDownloadUrlForLayer
              - ecr:BatchGetImage
              - ecr:BatchCheckLayerAvailability
              - ecr:PutImage
              - ecr:InitiateLayerUpload
              - ecr:UploadLayerPart
              - ecr:CompleteLayerUpload
  Build:
    Type: AWS::CodeBuild::Project
    Properties:
      Artifacts:
        Type: CODEPIPELINE
      Source:
        Type: CODEPIPELINE
        BuildSpec: !Sub |
          version: 0.2
          phases:
            pre_build:
              commands:
                - echo "[`date`] PRE_BUILD"
                - echo "Logging in to Amazon ECR..."
                - aws ecr get-login-password --region $REGION | docker login --username AWS --password-stdin $ACCOUNT.dkr.ecr.$REGION.amazonaws.com
                - IMAGE_URI="$ACCOUNT.dkr.ecr.$REGION.amazonaws.com/$REPO:$TAG"
            build:
              commands:
                - echo "[`date`] BUILD"
                - echo "Building Docker Image..."
                - docker build -t $REPO:$TAG .
                - docker tag $REPO:$TAG $IMAGE_URI
            post_build:
              commands:
                - echo "[`date`] POST_BUILD"
                - echo "Pushing Docker Image..."
                - docker push $IMAGE_URI
                - echo Writing image definitions file...
                - printf '[{"name":"svc","imageUri":"%s"}]' $IMAGE_URI > $FILE
          artifacts:
            files: $FILE
      Environment:
        ComputeType: BUILD_GENERAL1_SMALL
        Image: aws/codebuild/standard:6.0
        Type: LINUX_CONTAINER
        EnvironmentVariables:
          - Name: REGION
            Type: PLAINTEXT
            Value: !Ref AWS::Region
          - Name: ACCOUNT
            Type: PLAINTEXT
            Value: !Ref AWS::AccountId
          - Name: TAG
            Type: PLAINTEXT
            Value: latest
          - Name: REPO
            Type: PLAINTEXT
            Value: !Ref Registry
          - Name: FILE
            Type: PLAINTEXT
            Value: !Ref ImagesFile
        PrivilegedMode: true
      Name: !Ref AWS::StackName
      ServiceRole: !GetAtt CodeBuildServiceRole.Arn
  Pipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      RoleArn: !GetAtt CodePipelineServiceRole.Arn
      ArtifactStore:
        Type: S3
        Location: !Ref ArtifactBucket
      Stages:
        - Name: Source
          Actions:
            - Name: Site
              ActionTypeId:
                Category: Source
                Owner: AWS
                Version: '1'
                Provider: CodeCommit
              Configuration:
                RepositoryName: !GetAtt Repo.Name
                BranchName: main
                PollForSourceChanges: 'false'
              InputArtifacts: []
              OutputArtifacts:
                - Name: SourceArtifact
              RunOrder: 1
        - Name: Build
          Actions:
            - Name: Docker
              ActionTypeId:
                Category: Build
                Owner: AWS
                Version: '1'
                Provider: CodeBuild
              Configuration:
                ProjectName: !Ref Build
              InputArtifacts:
                - Name: SourceArtifact
              OutputArtifacts:
                - Name: BuildArtifact
              RunOrder: 1
        - Name: Deploy
          Actions:
            - Name: Fargate
              ActionTypeId:
                Category: Deploy
                Owner: AWS
                Version: '1'
                Provider: ECS
              Configuration:
                ClusterName: !Ref Cluster
                FileName: !Ref ImagesFile
                ServiceName: !GetAtt Service.Name
              InputArtifacts:
                - Name: BuildArtifact
              RunOrder: 1
  Cluster:
    Type: AWS::ECS::Cluster
    Properties:
      ClusterName: !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref AWS::StackId]]]]
  FargateTaskExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - ecs-tasks.amazonaws.com
            Action:
              - sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
  TaskRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - ecs-tasks.amazonaws.com
            Action:
              - sts:AssumeRole
  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      ContainerDefinitions:
        - Name: svc
          Image: !Sub ${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${Registry}:latest
          PortMappings:
            - ContainerPort: 8080
      Cpu: 256
      ExecutionRoleArn: !Ref FargateTaskExecutionRole
      Memory: 512
      NetworkMode: awsvpc
      RequiresCompatibilities:
        - FARGATE
      RuntimePlatform:
        CpuArchitecture: ARM64
        OperatingSystemFamily: LINUX
      TaskRoleArn: !Ref TaskRole
  Service:
    Type: AWS::ECS::Service
    Properties:
      Cluster: !Ref Cluster
      DesiredCount: 0
      LaunchType: FARGATE
      NetworkConfiguration:
        AwsvpcConfiguration:
          SecurityGroups:
            - !Ref SecG
          Subnets: !Ref Subs
      ServiceName: !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref AWS::StackId]]]]
      TaskDefinition: !Ref TaskDefinition
Outputs:
  ArtifactBucketName:
    Description: ArtifactBucket S3 Bucket Name
    Value: !Ref ArtifactBucket
  ArtifactBucketSecureUrl:
    Description: ArtifactBucket S3 Bucket Domain Name
    Value: !Sub 'https://${ArtifactBucket.DomainName}'
  ClusterName:
    Value: !Ref Cluster
  ServiceName:
    Value: !GetAtt Service.Name
  RepoUserAccessKey:
    Description: Repo User Access Key
    Value: !Ref RepoUserKey
  RepoUserSecretKey:
    Description: Repo User Secret Key
    Value: !GetAtt RepoUserKey.SecretAccessKey
  BuildArn:
    Description: CodeBuild Project ARN
    Value: !GetAtt Build.Arn
  RepoArn:
    Description: CodeCommit Repository ARN
    Value: !GetAtt Repo.Arn
  RepoName:
    Description: CodeCommit Repository Name
    Value: !GetAtt Repo.Name
  RepoCloneUrlHttp:
    Description: CodeCommit HTTP Clone URL
    Value: !GetAtt Repo.CloneUrlHttp
  RepoCloneUrlSsh:
    Description: CodeCommit SSH Clone URL
    Value: !GetAtt Repo.CloneUrlSsh
  PipelineUrl:
    Description: CodePipeline URL
    Value: !Sub https://console.aws.amazon.com/codepipeline/home?region=${AWS::Region}#/view/${Pipeline}
  RegistryUri:
    Description: ECR Repository URI
    Value: !GetAtt Registry.RepositoryUri
  TopicArn:
    Description: CodeCommit Notification SNS Topic ARN
    Value: !Ref Topic
Hope this helps!
Related
is it possible to create Kubernetes pods, services, replica controllers etc on AWS cloudfromation?
does AWS cloudformation supports creation of Kubernetes pods, services, replica controllers etc or setting up the EKS clusters and worker nodes and using Kubectl to create the resources are the only way?
Not out of the box, but you can if you use a custom resource type backed by a lambda function in CloudFormation. The AWS EKS quickstart has an example: AWSTemplateFormatVersion: "2010-09-09" Description: "deploy an example workload into an existing kubernetes cluster (qs-1p817r5f9)" Parameters: KubeConfigPath: Type: String KubeConfigKmsContext: Type: String Default: "EKSQuickStart" KubeClusterName: Type: String NodeInstanceProfile: Type: String QSS3BucketName: AllowedPattern: ^[0-9a-zA-Z]+([0-9a-zA-Z-]*[0-9a-zA-Z])*$ ConstraintDescription: Quick Start bucket name can include numbers, lowercase letters, uppercase letters, and hyphens (-). It cannot start or end with a hyphen (-). Default: aws-quickstart Description: S3 bucket name for the Quick Start assets. This string can include numbers, lowercase letters, uppercase letters, and hyphens (-). It cannot start or end with a hyphen (-). Type: String QSS3KeyPrefix: AllowedPattern: ^[0-9a-zA-Z-/.]*$ ConstraintDescription: Quick Start key prefix can include numbers, lowercase letters, uppercase letters, hyphens (-), dots(.) and forward slash (/). Default: quickstart-amazon-eks/ Description: S3 key prefix for the Quick Start assets. Quick Start key prefix can include numbers, lowercase letters, uppercase letters, hyphens (-), dots(.) and forward slash (/). Type: String QSS3BucketRegion: Default: 'us-east-1' Description: The AWS Region where the Quick Start S3 bucket (QSS3BucketName) is hosted. When using your own bucket, you must specify this value. Type: String LambdaZipsBucketName: Description: 'OPTIONAL: Bucket Name where the lambda zip files should be placed, if left blank a bucket will be created.' 
Type: String Default: '' K8sSubnetIds: Type: List<AWS::EC2::Subnet::Id> VPCID: Type: AWS::EC2::VPC::Id ControlPlaneSecurityGroup: Type: AWS::EC2::SecurityGroup::Id Conditions: CreateLambdaZipsBucket: !Equals - !Ref 'LambdaZipsBucketName' - '' UsingDefaultBucket: !Equals [!Ref QSS3BucketName, 'aws-quickstart'] Resources: WorkloadStack: Type: AWS::CloudFormation::Stack Properties: TemplateURL: !Sub - 'https://${S3Bucket}.s3.${S3Region}.${AWS::URLSuffix}/${QSS3KeyPrefix}templates/example-workload.template.yaml' - S3Region: !If [UsingDefaultBucket, !Ref 'AWS::Region', !Ref QSS3BucketRegion] S3Bucket: !If [UsingDefaultBucket, !Sub '${QSS3BucketName}-${AWS::Region}', !Ref QSS3BucketName] Parameters: KubeManifestLambdaArn: !GetAtt KubeManifestLambda.Arn HelmLambdaArn: !GetAtt HelmLambda.Arn KubeConfigPath: !Ref KubeConfigPath KubeConfigKmsContext: !Ref KubeConfigKmsContext KubeClusterName: !Ref KubeClusterName NodeInstanceProfile: !Ref NodeInstanceProfile CopyZips: Type: Custom::CopyZips Properties: ServiceToken: !GetAtt 'CopyZipsFunction.Arn' DestBucket: !Ref LambdaZipsBucketName SourceBucket: !If [UsingDefaultBucket, !Sub '${QSS3BucketName}-${AWS::Region}', !Ref QSS3BucketName] Prefix: !Ref 'QSS3KeyPrefix' Objects: - functions/packages/Helm/lambda.zip - functions/packages/DeleteBucketContents/lambda.zip - functions/packages/KubeManifest/lambda.zip - functions/packages/LambdaEniCleanup/lambda.zip VPCLambdaCleanup: Type: Custom::LambdaCleanup Properties: ServiceToken: !GetAtt VPCLambdaCleanupLambdaFunction.Arn Region: !Ref "AWS::Region" LambdaFunctionNames: - !Ref KubeManifestLambda VPCLambdaCleanupLambdaFunction: DependsOn: CopyZips Type: "AWS::Lambda::Function" Properties: Handler: lambda_function.lambda_handler MemorySize: 128 Role: !GetAtt LambdaCleanUpFunctionRole.Arn Runtime: python3.7 Timeout: 900 Code: S3Bucket: !Ref LambdaZipsBucketName S3Key: !Sub '${QSS3KeyPrefix}functions/packages/LambdaEniCleanup/lambda.zip' HelmLambda: DependsOn: CopyZips Type: 
AWS::Lambda::Function Properties: Handler: lambda_function.lambda_handler MemorySize: 128 Role: !GetAtt ManifestRole.Arn Runtime: python3.6 Timeout: 900 Code: S3Bucket: !Ref LambdaZipsBucketName S3Key: !Sub '${QSS3KeyPrefix}functions/packages/Helm/lambda.zip' VpcConfig: SecurityGroupIds: [ !Ref EKSLambdaSecurityGroup ] SubnetIds: !Ref K8sSubnetIds KubeManifestLambda: DependsOn: CopyZips Type: AWS::Lambda::Function Properties: Handler: lambda_function.lambda_handler MemorySize: 128 Role: !GetAtt ManifestRole.Arn Runtime: python3.6 Timeout: 900 Code: S3Bucket: !Ref LambdaZipsBucketName S3Key: !Sub '${QSS3KeyPrefix}functions/packages/KubeManifest/lambda.zip' VpcConfig: SecurityGroupIds: [ !Ref EKSLambdaSecurityGroup ] SubnetIds: !Ref K8sSubnetIds DeleteBucketContentsLambda: DependsOn: CopyZips Type: AWS::Lambda::Function Properties: Handler: lambda_function.lambda_handler MemorySize: 128 Role: !GetAtt DeleteBucketContentsRole.Arn Runtime: python3.7 Timeout: 900 Code: S3Bucket: !Ref LambdaZipsBucketName S3Key: !Sub '${QSS3KeyPrefix}functions/packages/DeleteBucketContents/lambda.zip' CopyZipsFunction: Type: AWS::Lambda::Function Properties: Description: Copies objects from a source S3 bucket to a destination Handler: index.handler Runtime: python3.7 Role: !GetAtt CopyZipsRole.Arn Timeout: 900 Code: ZipFile: | import json import logging import threading import boto3 import cfnresponse def copy_objects(source_bucket, dest_bucket, prefix, objects): s3 = boto3.client('s3') for o in objects: key = prefix + o copy_source = { 'Bucket': source_bucket, 'Key': key } print('copy_source: %s' % copy_source) print('dest_bucket = %s'%dest_bucket) print('key = %s' %key) s3.copy_object(CopySource=copy_source, Bucket=dest_bucket, Key=key) def delete_objects(bucket, prefix, objects): s3 = boto3.client('s3') objects = {'Objects': [{'Key': prefix + o} for o in objects]} s3.delete_objects(Bucket=bucket, Delete=objects) def timeout(event, context): logging.error('Execution is about to time 
out, sending failure response to CloudFormation') cfnresponse.send(event, context, cfnresponse.FAILED, {}, physical_resource_id) def handler(event, context): physical_resource_id = None if "PhysicalResourceId" in event.keys(): physical_resource_id = event["PhysicalResourceId"] # make sure we send a failure to CloudFormation if the function is going to timeout timer = threading.Timer((context.get_remaining_time_in_millis() / 1000.00) - 0.5, timeout, args=[event, context]) timer.start() print('Received event: %s' % json.dumps(event)) status = cfnresponse.SUCCESS try: source_bucket = event['ResourceProperties']['SourceBucket'] dest_bucket = event['ResourceProperties']['DestBucket'] prefix = event['ResourceProperties']['Prefix'] objects = event['ResourceProperties']['Objects'] if event['RequestType'] == 'Delete': delete_objects(dest_bucket, prefix, objects) else: copy_objects(source_bucket, dest_bucket, prefix, objects) except Exception as e: logging.error('Exception: %s' % e, exc_info=True) status = cfnresponse.FAILED finally: timer.cancel() cfnresponse.send(event, context, status, {}, physical_resource_id) LambdaZipsBucket: Type: AWS::S3::Bucket Condition: CreateLambdaZipsBucket LambdaCleanUpFunctionRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Statement: - Action: ['sts:AssumeRole'] Effect: Allow Principal: Service: [lambda.amazonaws.com] Version: '2012-10-17' Path: / Policies: - PolicyName: LambdaRole PolicyDocument: Version: '2012-10-17' Statement: - Action: - 'logs:CreateLogGroup' - 'logs:CreateLogStream' - 'logs:PutLogEvents' Effect: Allow Resource: !Sub "arn:${AWS::Partition}:logs:*:*:*" - Action: - 'ec2:*' Effect: Allow Resource: "*" DeleteBucketContentsRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: '2012-10-17' Statement: - Effect: Allow Principal: Service: lambda.amazonaws.com Action: sts:AssumeRole ManagedPolicyArns: - !Sub 'arn:${AWS::Partition}:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole' 
Policies: - PolicyName: deletebucketcontents PolicyDocument: Version: '2012-10-17' Statement: - Effect: Allow Action: s3:* Resource: - !Sub 'arn:${AWS::Partition}:s3:::${LambdaZipsBucketName}/*' - !Sub 'arn:${AWS::Partition}:s3:::${LambdaZipsBucketName}' CopyZipsRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: '2012-10-17' Statement: - Effect: Allow Principal: Service: lambda.amazonaws.com Action: sts:AssumeRole ManagedPolicyArns: - !Su 'arn:${AWS::Partition}:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole' Policies: - PolicyName: lambda-copier PolicyDocument: Version: '2012-10-17' Statement: - Effect: Allow Action: s3:GetObject Resource: !Sub - 'arn:${AWS::Partition}:s3:::${S3Bucket}/${QSS3KeyPrefix}*' - S3Bucket: !If [UsingDefaultBucket, !Sub '${QSS3BucketName}-${AWS::Region}', !Ref QSS3BucketName] - Effect: Allow Action: - s3:PutObject - s3:DeleteObject Resource: !Sub 'arn:${AWS::Partition}:s3:::${LambdaZipsBucketName}/${QSS3KeyPrefix}*' ManifestRole: Type: "AWS::IAM::Role" Properties: AssumeRolePolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: Service: lambda.amazonaws.com Action: sts:AssumeRole Policies: - PolicyName: eksStackPolicy PolicyDocument: Version: '2012-10-17' Statement: - Effect: Allow Action: s3:GetObject Resource: !Sub - "arn:${AWS::Partition}:s3:::${BucketName}/*" - S3Bucket: !If [UsingDefaultBucket, !Sub '${QSS3BucketName}-${AWS::Region}', !Ref QSS3BucketName] - Effect: Allow Action: - logs:CreateLogGroup - logs:CreateLogStream - logs:PutLogEvents - ec2:CreateNetworkInterface - ec2:DescribeNetworkInterfaces - ec2:DeleteNetworkInterface Resource: - "*" - Action: "kms:decrypt" Effect: Allow Resource: "*" - Action: "s3:GetObject" Effect: Allow Resource: "*" EKSLambdaSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Security group for lambda to communicate with cluster API VpcId: !Ref VPCID ClusterControlPlaneSecurityGroupIngress: Type: AWS::EC2::SecurityGroupIngress 
Properties: Description: Allow lambda to communicate with the cluster API Server GroupId: !Ref ControlPlaneSecurityGroup SourceSecurityGroupId: !Ref EKSLambdaSecurityGroup IpProtocol: tcp ToPort: 443 FromPort: 443 It works by creating a lambda function customer resource KubeManifestLambda and HelmLambda that has kubectl and helm installed respectively, both configured with a role that allows them to access the EKS k8s cluster. Then these custom resources can be used to deploy k8s manifests and helm charts with custom values, like in this example. KubeManifestExample: Type: "Custom::KubeManifest" Version: '1.0' Properties: # The lambda function that executes the manifest against the cluster. This is created in one of the parent stacks ServiceToken: !Ref KubeManifestLambdaArn # S3 path to the encrypted config file eg. s3://my-bucket/kube/config.encrypted KubeConfigPath: !Ref KubeConfigPath # context for KMS to use when decrypting the file KubeConfigKmsContext: !Ref KubeConfigKmsContext # Kubernetes manifest Manifest: apiVersion: v1 kind: ConfigMap metadata: # If name is not specified it will be automatically generated, # and can be retrieved with !GetAtt LogicalID.name # # name: test # # if namespace is not specified, "default" namespace will be used namespace: kube-system data: # examples of consuming outputs of the HelmExample resource below's output. Creates an implicit dependency, so # this resource will only launch once the HelmExample resource has completed successfully ServiceCatalogReleaseName: !Ref HelmExample ServiceCatalogKubernetesServiceName: !GetAtt HelmExample.Service0 This even lets you reference other Cloud formation resources such as RDS instances that are created as part of a workload.
You can use CloudFormation to create EKS cluster and worker nodes but you have to use kubectl for any operation on cluster like creating service, pods, deployments etc.....you can’t use CloudFormation for that
If you use CDK, you can use cluster.add_helm_chart() or HelmChart class. It will create a lambda behind the scenes. Or you can create a lambda directly with https://docs.aws.amazon.com/cdk/api/v2/python/aws_cdk.lambda_layer_kubectl/README.html
Set CodeBuild env var from previous step in codepipeline
I have the following Codepipeline (cloudformation template snippet). My BuildDeployWebappTest step needs the name of an s3 bucket created in the previous ExecuteChangeSet step. As you can see BuildDeployWebappTest step uses the WebappCodeBuildProjectTestStage CodeBuild project config. The WEBAPP_S3_BUCKET env var is what needs to be set to the value of the bucket name created in the ExecuteChangeSet step. How can I make this happen? Currently I have to create S3 bucket outside of this pipeline and "hard code" the S3 bucket name as a CloudFormation parameter (TestStageS3Bucket) to the pipeline below. The ExecuteChangeSet produces a CloudFormation output named WebAppBucket. I can export it if needed. I know CodeBuild supports multiple InputArtifacts, but I don't know how I'd reference a CloudFormation OutputArtifact in the CodeBuild project config OR within the CodeBuild buildspec.yml file itself. ... WebappCodeBuildProjectTestStage: Type: AWS::CodeBuild::Project Properties: Artifacts: Type: CODEPIPELINE Environment: ComputeType: BUILD_GENERAL1_SMALL PrivilegedMode: false Type: LINUX_CONTAINER Image: !Ref CodeBuildImage EnvironmentVariables: - Name: WEBAPP_S3_BUCKET Value: !Ref TestStageS3Bucket - Name: APP_STAGE Value: test - Name: ANGULAR_BUILD Value: test ServiceRole: !Ref CodeBuildRole Source: Type: CODEPIPELINE BuildSpec: !Ref WebappBuildspecPath TimeoutInMinutes: !Ref BuildTimeout ... 
Pipeline: Type: AWS::CodePipeline::Pipeline Properties: ArtifactStore: Location: !Ref 'ArtifactStoreBucket' Type: S3 DisableInboundStageTransitions: [] Name: !Ref AWS::StackName RoleArn: !GetAtt PipelineRole.Arn Stages: - Name: Source Actions: - Name: Source ActionTypeId: Category: Source Owner: ThirdParty Provider: GitHub Version: '1' OutputArtifacts: - Name: MyAppCode Configuration: Owner: !Ref GithubOrg Repo: !Select [ 0, !Split [ '--', !Ref 'AWS::StackName' ] ] PollForSourceChanges: false Branch: !Select [ 1, !Split [ '--', !Ref 'AWS::StackName' ] ] OAuthToken: !Ref GithubOAuthToken RunOrder: 1 - Name: DeployTestResources Actions: - Name: CreateChangeSet ActionTypeId: Category: Deploy Owner: AWS Provider: CloudFormation Version: '1' InputArtifacts: - Name: MyAppCode Configuration: ActionMode: CHANGE_SET_REPLACE RoleArn: !GetAtt CFNRole.Arn Capabilities: CAPABILITY_IAM StackName: !Sub "${AWS::StackName}--test--gen" ChangeSetName: !Sub "${AWS::StackName}--test--changeset" TemplatePath: MyAppCode::aws/cloudformation/template.yml TemplateConfiguration: !Sub "MyAppCode::${TestCloudFormationTemplateParameters}" RunOrder: 1 - Name: ExecuteChangeSet ActionTypeId: Category: Deploy Owner: AWS Provider: CloudFormation Version: '1' Configuration: ActionMode: CHANGE_SET_EXECUTE RoleArn: !GetAtt CFNRole.Arn StackName: !Sub "${AWS::StackName}--test--gen" ChangeSetName: !Sub "${AWS::StackName}--test--changeset" OutputFileName: TestOutput.json OutputArtifacts: - Name: DeployTestResourcesOutput RunOrder: 2 - Name: BuildDeployWebappTest Actions: - Name: CodeBuild InputArtifacts: - Name: MyAppCode ActionTypeId: Category: Build Owner: AWS Provider: CodeBuild Version: '1' Configuration: ProjectName: !Ref WebappCodeBuildProjectTestStage RunOrder: 1
AWS CloudFormation CodePipeline: Could not fetch the contents of the repository from GitHub
I'm attempting to set up an AWS CloudFormation configuration using CodePipeline and GitHub. I've failed both at my own example project and at the tutorial "Create a GitHub Pipeline with AWS CloudFormation". All resources are created, but in CodePipeline I continuously get the following error during the initial "Source" stage:

Could not fetch the contents of the repository from GitHub.

Note that this is not a permissions error; it's something else that, so far, doesn't turn up on Google. GitHub can be configured to work if I stop using CloudFormation and create a CodePipeline through the console, but for my purposes I need to use CloudFormation and stick to a template. Here is the template copied from the tutorial:

```yaml
Parameters:
  BranchName:
    Description: GitHub branch name
    Type: String
    Default: master
  RepositoryName:
    Description: GitHub repository name
    Type: String
    Default: test
  GitHubOwner:
    Type: String
  GitHubSecret:
    Type: String
    NoEcho: true
  GitHubOAuthToken:
    Type: String
    NoEcho: true
  ApplicationName:
    Description: CodeDeploy application name
    Type: String
    Default: DemoApplication
  BetaFleet:
    Description: Fleet configured in CodeDeploy
    Type: String
    Default: DemoFleet
Resources:
  CodePipelineArtifactStoreBucket:
    Type: "AWS::S3::Bucket"
  CodePipelineArtifactStoreBucketPolicy:
    Type: "AWS::S3::BucketPolicy"
    Properties:
      Bucket: !Ref CodePipelineArtifactStoreBucket
      PolicyDocument:
        Version: 2012-10-17
        Statement:
          - Sid: DenyUnEncryptedObjectUploads
            Effect: Deny
            Principal: "*"
            Action: "s3:PutObject"
            Resource: !Join
              - ""
              - - !GetAtt
                  - CodePipelineArtifactStoreBucket
                  - Arn
                - /*
            Condition:
              StringNotEquals:
                "s3:x-amz-server-side-encryption": "aws:kms"
          - Sid: DenyInsecureConnections
            Effect: Deny
            Principal: "*"
            Action: "s3:*"
            Resource: !Join
              - ""
              - - !GetAtt
                  - CodePipelineArtifactStoreBucket
                  - Arn
                - /*
            Condition:
              Bool:
                "aws:SecureTransport": false
  AppPipelineWebhook:
    Type: "AWS::CodePipeline::Webhook"
    Properties:
      Authentication: GITHUB_HMAC
      AuthenticationConfiguration:
        SecretToken: !Ref GitHubSecret
      Filters:
        - JsonPath: $.ref
          MatchEquals: "refs/heads/{Branch}"
      TargetPipeline: !Ref AppPipeline
      TargetAction: SourceAction
      Name: AppPipelineWebhook
      TargetPipelineVersion: !GetAtt
        - AppPipeline
        - Version
      RegisterWithThirdParty: true
  AppPipeline:
    Type: "AWS::CodePipeline::Pipeline"
    Properties:
      Name: github-events-pipeline
      RoleArn: !GetAtt
        - CodePipelineServiceRole
        - Arn
      Stages:
        - Name: Source
          Actions:
            - Name: SourceAction
              ActionTypeId:
                Category: Source
                Owner: ThirdParty
                Version: 1
                Provider: GitHub
              OutputArtifacts:
                - Name: SourceOutput
              Configuration:
                Owner: !Ref GitHubOwner
                Repo: !Ref RepositoryName
                Branch: !Ref BranchName
                OAuthToken: !Ref GitHubOAuthToken
                PollForSourceChanges: false
              RunOrder: 1
        - Name: Beta
          Actions:
            - Name: BetaAction
              InputArtifacts:
                - Name: SourceOutput
              ActionTypeId:
                Category: Deploy
                Owner: AWS
                Version: 1
                Provider: CodeDeploy
              Configuration:
                ApplicationName: !Ref ApplicationName
                DeploymentGroupName: !Ref BetaFleet
              RunOrder: 1
      ArtifactStore:
        Type: S3
        Location: !Ref CodePipelineArtifactStoreBucket
  CodePipelineServiceRole:
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - codepipeline.amazonaws.com
            Action: "sts:AssumeRole"
      Path: /
      Policies:
        - PolicyName: AWS-CodePipeline-Service-3
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - "codecommit:CancelUploadArchive"
                  - "codecommit:GetBranch"
                  - "codecommit:GetCommit"
                  - "codecommit:GetUploadArchiveStatus"
                  - "codecommit:UploadArchive"
                Resource: "*"
              - Effect: Allow
                Action:
                  - "codedeploy:CreateDeployment"
                  - "codedeploy:GetApplicationRevision"
                  - "codedeploy:GetDeployment"
                  - "codedeploy:GetDeploymentConfig"
                  - "codedeploy:RegisterApplicationRevision"
                Resource: "*"
              - Effect: Allow
                Action:
                  - "codebuild:BatchGetBuilds"
                  - "codebuild:StartBuild"
                Resource: "*"
              - Effect: Allow
                Action:
                  - "devicefarm:ListProjects"
                  - "devicefarm:ListDevicePools"
                  - "devicefarm:GetRun"
                  - "devicefarm:GetUpload"
                  - "devicefarm:CreateUpload"
                  - "devicefarm:ScheduleRun"
                Resource: "*"
              - Effect: Allow
                Action:
                  - "lambda:InvokeFunction"
                  - "lambda:ListFunctions"
                Resource: "*"
              - Effect: Allow
                Action:
                  - "iam:PassRole"
                Resource: "*"
              - Effect: Allow
                Action:
                  - "elasticbeanstalk:*"
                  - "ec2:*"
                  - "elasticloadbalancing:*"
                  - "autoscaling:*"
                  - "cloudwatch:*"
                  - "s3:*"
                  - "sns:*"
                  - "cloudformation:*"
                  - "rds:*"
                  - "sqs:*"
                  - "ecs:*"
                Resource: "*"
```

I have taken the following steps:

- provided the GitHub organization, repo & branch
- set up a personal access token on GitHub with access to repo:all & admin:repo_hook and supplied it to the GitHubOAuthToken template parameter
- set up a random string and provided it as GitHubSecret
- tried not including a GitHubSecret, as in many other examples
- verified that AWS CodePipeline for my region is listed in GitHub under "Authorized OAuth Applications"

In an attempt to start from a clean slate, I've also done the following:

- cleared all GitHub webhooks before starting (aws codepipeline list-webhooks & aws codepipeline delete-webhook --name)
- added a new personal access token
- tried multiple repos & branches

Any ideas how I can get GitHub to work with CloudFormation & CodePipeline?
Found the solution: the GitHub organization name is case sensitive.
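In the template above, the value affected is the Owner in the Source action's Configuration (fed by the GitHubOwner parameter); per the answer, it must match the organization name's casing exactly:

```yaml
# Sketch: the GitHubOwner parameter value is case sensitive,
# e.g. "MyOrg" will work where "myorg" silently fails with
# "Could not fetch the contents of the repository from GitHub".
Configuration:
  Owner: !Ref GitHubOwner      # must match the org's exact casing
  Repo: !Ref RepositoryName
  Branch: !Ref BranchName
  OAuthToken: !Ref GitHubOAuthToken
  PollForSourceChanges: false
```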
Setting up CodePipeline template to deploy CloudFormation stack from CodeCommit
From a CloudFormation template, you can deploy CodeCommit and CodePipeline. From this announcement: "You can now choose AWS CloudFormation as a deployment action in your release workflows built using AWS CodePipeline." I've worked out most of the CloudFormation template, but I can't figure out the stages:

```yaml
Resources:
  PipelineRepo:
    Type: AWS::CodeCommit::Repository
    Properties:
      RepositoryName: pipeline
      RepositoryDescription: Pipeline setup repo
  PipelineArtifacts:
    Type: AWS::S3::Bucket
  PipelineRole:
    Type: AWS::IAM::Role
  Pipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      Name: pipeline-pipeline
      ArtifactStore:
        Type: S3
        Location:
          Ref: PipelineArtifacts
      RoleArn:
        Ref: PipelineRole
      Stages:
        ... STAGES ...
```

How do you set up the stages to track CodeCommit and then deploy a CloudFormation template from a file in the repo?
The IAM role in the official documentation is broken too. Below is a functioning stack. For the various types of CloudFormation deployments, see the CloudFormation Configuration Properties; a helpful sample CloudFormation stack is here.

```yaml
Resources:
  PipelineRepo:
    Type: AWS::CodeCommit::Repository
    Properties:
      RepositoryName: pipeline
      RepositoryDescription: Pipeline setup repo
  PipelineArtifacts:
    Type: AWS::S3::Bucket
  PipelineRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - codepipeline.amazonaws.com
                - cloudformation.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: CloudPipelinePolicy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action: "cloudformation:*"
                Resource: "*"
              - Effect: Allow
                Action: "codecommit:*"
                Resource: "*"
              - Effect: Allow
                Action: "s3:*"
                Resource: "*"
              - Effect: Allow
                Action:
                  - iam:PassRole
                Resource: "*"
  Pipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      Name: pipeline-pipeline
      ArtifactStore:
        Type: S3
        Location:
          Ref: PipelineArtifacts
      RoleArn: !GetAtt [PipelineRole, Arn]
      Stages:
        - Name: Source
          Actions:
            - Name: CheckoutSourceTemplate
              ActionTypeId:
                Category: Source
                Owner: AWS
                Version: 1
                Provider: CodeCommit
              Configuration:
                PollForSourceChanges: True
                RepositoryName: !GetAtt [PipelineRepo, Name]
                BranchName: master
              OutputArtifacts:
                - Name: TemplateSource
              RunOrder: 1
        - Name: Deploy
          Actions:
            - Name: CreateStack
              ActionTypeId:
                Category: Deploy
                Owner: AWS
                Provider: CloudFormation
                Version: 1
              InputArtifacts:
                - Name: TemplateSource
              Configuration:
                ActionMode: CREATE_UPDATE
                RoleArn: !GetAtt [PipelineRole, Arn]
                StackName: pipeline
                Capabilities: CAPABILITY_IAM
                TemplatePath: TemplateSource::template.yml
              RunOrder: 1
```
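To actually exercise the Deploy stage, the CodeCommit repo needs a template.yml at its root, since that is where TemplateSource::template.yml points. A minimal, hypothetical placeholder (any valid template at that path would do):

```yaml
# template.yml — sketch of a trivially deployable stack; the SNS topic
# is just an example resource, not something the answer prescribes.
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  DemoTopic:
    Type: AWS::SNS::Topic
```

Once this file is committed and pushed to the pipeline repo's master branch, the pipeline should check it out and create/update the "pipeline" stack from it.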
How to make a list item conditional in Cloud Formation template?
I have the following CloudFormation template that creates a CodePipeline. The pipeline has three stages:

```yaml
Stages:
  - Name: "Source"
    Actions:
      - Name: "Source"
        ActionTypeId:
          Category: "Source"
          Owner: "ThirdParty"
          Version: "1"
          Provider: "GitHub"
        OutputArtifacts:
          - Name: "MyApp"
        Configuration:
          Owner: !Ref GithubOwner
          Repo: !Ref GithubRepo
          PollForSourceChanges: "true"
          Branch: !Ref GithubBranch
          OAuthToken: !Ref GithubTokenParameter
        RunOrder: 1
  - Name: "Run-Unit-Tests"
    Actions:
      - InputArtifacts:
          - Name: "MyApp"
        Name: "UnitTests"
        ActionTypeId:
          Category: "Test"
          Owner: "AWS"
          Version: "1"
          Provider: "CodeBuild"
        OutputArtifacts:
          - Name: "MyTests"
        Configuration:
          ProjectName: !Ref CodeBuildName
        RunOrder: 1
  - Name: "Deploy-Staging"
    Actions:
      - InputArtifacts:
          - Name: "MyApp"
        Name: "Deploy-Staging"
        ActionTypeId:
          Category: "Deploy"
          Owner: "AWS"
          Version: "1"
          Provider: "ElasticBeanstalk"
        Configuration:
          ApplicationName: !Ref BeanstalkApplicationName
          EnvironmentName: !Ref BeanstalkEnvironmentStaging
        RunOrder: 1
```

I also have a condition:

```yaml
IncludeStagingEnv: !Equals [Staging, !Ref CodePipelineEnvironment]
```

When the condition is false, I would like to omit the third item in the pipeline's Stages list. I tried using !If with AWS::NoValue, but NoValue is not a valid list item:

```yaml
Stages:
  - !If
    - IncludeStagingEnv
    - Name: "Deploy-Staging"
      Actions:
        - InputArtifacts:
            - Name: "MyApp"
          Name: "Deploy-Staging"
          ActionTypeId:
            Category: "Deploy"
            Owner: "AWS"
            Version: "1"
            Provider: "ElasticBeanstalk"
          Configuration:
            ApplicationName: !Ref BeanstalkApplicationName
            EnvironmentName: !Ref BeanstalkEnvironmentStaging
          RunOrder: 1
    - AWS::NoValue
```

How can I omit the last item when IncludeStagingEnv is false?
The same problem occurred in my template for a CloudFront distribution. The solution was to use AWS::NoValue with the Ref attribute:

```yaml
...
LambdaFunctionAssociations:
  Fn::If:
    - Authentication
    - - EventType: "viewer-request"
        LambdaFunctionARN: "arn:aws:lambda:us-east-1:..."
    - - Ref: "AWS::NoValue"
...
```

If this works the same way for all resources, you should change your conditional part to:

```yaml
Stages:
  - !If
    - IncludeStagingEnv
    - - Name: "Deploy-Staging"
        Actions:
          - InputArtifacts:
              ...
    - - Ref: "AWS::NoValue"
```

Hope this helps!
Fabi755's answer put me on the right path, thank you! I was fighting with the same LambdaFunctionAssociations challenge. I settled on a slightly different, slightly better approach, as follows. I think it is better in that it works for multiple optional list items:

```yaml
LambdaFunctionAssociations:
  - !If
    - HasOriginResponseFunctionArn
    - EventType: origin-response
      LambdaFunctionARN: !Ref OriginResponseFunctionArn
    - !Ref AWS::NoValue
  - !If
    - HasViewerRequestFunctionArn
    - EventType: viewer-request
      LambdaFunctionARN: !Ref ViewerRequestFunctionArn
    - !Ref AWS::NoValue
```
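Applying this per-item pattern back to the original question's pipeline, a sketch could look like the following. The condition name is the one from the question, and the unchanged stages are elided; the key point is that `!Ref AWS::NoValue` as the else-branch drops the list item entirely rather than nesting a list:

```yaml
Conditions:
  IncludeStagingEnv: !Equals [Staging, !Ref CodePipelineEnvironment]

# ... inside the pipeline's Properties ...
Stages:
  - Name: "Source"
    # ... unchanged ...
  - Name: "Run-Unit-Tests"
    # ... unchanged ...
  - !If
    - IncludeStagingEnv
    - Name: "Deploy-Staging"
      Actions:
        - Name: "Deploy-Staging"
          InputArtifacts:
            - Name: "MyApp"
          ActionTypeId:
            Category: "Deploy"
            Owner: "AWS"
            Version: "1"
            Provider: "ElasticBeanstalk"
          Configuration:
            ApplicationName: !Ref BeanstalkApplicationName
            EnvironmentName: !Ref BeanstalkEnvironmentStaging
          RunOrder: 1
    - !Ref AWS::NoValue
```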