Setting up CodePipeline template to deploy CloudFormation stack from CodeCommit - aws-cloudformation

You can deploy both CodeCommit and CodePipeline from a CloudFormation template. From this announcement:
You can now choose AWS CloudFormation as a deployment action in your release workflows built using AWS CodePipeline.
I've worked out most of the CloudFormation template, but I can't figure out the stages.
Resources:
  PipelineRepo:
    Type: AWS::CodeCommit::Repository
    Properties:
      RepositoryName: pipeline
      RepositoryDescription: Pipeline setup repo
  PipelineArtifacts:
    Type: AWS::S3::Bucket
  PipelineRole:
    Type: AWS::IAM::Role
  Pipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      Name: pipeline-pipeline
      ArtifactStore:
        Type: S3
        Location:
          Ref: PipelineArtifacts
      RoleArn:
        Ref: PipelineRole
      Stages:
        ... STAGES ...
How do you set up the stages to track CodeCommit and then deploy a CloudFormation template from a file in the repo?

Official Documentation:
The IAM role is broken too. Below is a functioning stack. For the various types of CloudFormation deployments, see the CloudFormation configuration properties. A helpful sample CloudFormation stack is here.
Resources:
  PipelineRepo:
    Type: AWS::CodeCommit::Repository
    Properties:
      RepositoryName: pipeline
      RepositoryDescription: Pipeline setup repo
  PipelineArtifacts:
    Type: AWS::S3::Bucket
  PipelineRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - codepipeline.amazonaws.com
                - cloudformation.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: CloudPipelinePolicy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action: "cloudformation:*"
                Resource: "*"
              - Effect: Allow
                Action: "codecommit:*"
                Resource: "*"
              - Effect: Allow
                Action: "s3:*"
                Resource: "*"
              - Effect: Allow
                Action:
                  - iam:PassRole
                Resource: "*"
  Pipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      Name: pipeline-pipeline
      ArtifactStore:
        Type: S3
        Location:
          Ref: PipelineArtifacts
      RoleArn: !GetAtt [PipelineRole, Arn]
      Stages:
        - Name: Source
          Actions:
            - Name: CheckoutSourceTemplate
              ActionTypeId:
                Category: Source
                Owner: AWS
                Version: 1
                Provider: CodeCommit
              Configuration:
                PollForSourceChanges: True
                RepositoryName: !GetAtt [PipelineRepo, Name]
                BranchName: master
              OutputArtifacts:
                - Name: TemplateSource
              RunOrder: 1
        - Name: Deploy
          Actions:
            - Name: CreateStack
              ActionTypeId:
                Category: Deploy
                Owner: AWS
                Provider: CloudFormation
                Version: 1
              InputArtifacts:
                - Name: TemplateSource
              Configuration:
                ActionMode: CREATE_UPDATE
                RoleArn: !GetAtt [PipelineRole, Arn]
                StackName: pipeline
                Capabilities: CAPABILITY_IAM
                TemplatePath: TemplateSource::template.yml
              RunOrder: 1
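To stand the pipeline stack itself up, a plain aws cloudformation deploy call should be enough; the template file name below is an assumption, and CAPABILITY_IAM is needed because the template creates an IAM role:
aws cloudformation deploy \
  --template-file pipeline.yml \
  --stack-name pipeline-setup \
  --capabilities CAPABILITY_IAM
Once it is up, pushing a template.yml to the master branch of the pipeline repository triggers the Source stage, and the Deploy stage then creates or updates the stack named in StackName.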

Related

how to pass bucket/key name to fargate job via a cloudwatch event trigger on s3 object creation event?

I have created a Fargate task and am trying to trigger it via an S3 object-creation event (see sample below) using CloudFormation. Since S3 cannot trigger the task directly, I created a CloudWatch Events rule. I am trying to pass the bucket and object name to my Fargate task code. Doing some research, I came across InputTransformer, but I'm not sure how to pass the values of my bucket and key names, or how to read them in my Python code. Any help will be appreciated.
AWSTemplateFormatVersion: 2010-09-09
Description: An example CloudFormation template for Fargate.
Parameters:
  VPC:
    Type: AWS::EC2::VPC::Id
  SubnetA:
    Type: AWS::EC2::Subnet::Id
  SubnetB:
    Type: AWS::EC2::Subnet::Id
  Image:
    Type: String
    Default: 123456789012.dkr.ecr.region.amazonaws.com/image:tag
Resources:
  mybucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: 'mytestbucket-us'
  cloudwatchEvent:
    Type: AWS::Events::Rule
    Properties:
      EventPattern:
        source:
          - aws.s3
        detail:
          eventSource:
            - s3.amazonaws.com
          eventName:
            - PutObject
            - CompleteMultipartUpload
          requestParameters:
            bucketName:
              - !Ref mybucket
      Targets:
        - Id: my-fargate-task
          Arn: myclusterArn
          RoleArn: myinvocationrolearn
          Input:
            'Fn::Sub':
              - >-
                {"containerOverrides": [{"name":"somecontainer"}]}
          EcsParameters:
            TaskDefinition:
            LaunchType: 'FARGATE'
            ...
            NetworkConfiguration:
              ...
  Cluster:
    Type: AWS::ECS::Cluster
    Properties:
      ClusterName: !Join ['', [!Ref ServiceName, Cluster]]
  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    DependsOn: LogGroup
    Properties:
      Family: !Join ['', [!Ref ServiceName, TaskDefinition]]
      NetworkMode: awsvpc
      RequiresCompatibilities:
        - FARGATE
      Cpu: 256
      Memory: 2GB
      ExecutionRoleArn: !Ref ExecutionRole
      TaskRoleArn: !Ref TaskRole
      ContainerDefinitions:
        - Name: !Ref ServiceName
          Image: !Ref Image
  # A role needed by ECS
  ExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Join ['', [!Ref ServiceName, ExecutionRole]]
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: ecs-tasks.amazonaws.com
            Action: 'sts:AssumeRole'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy'
  # A role for the containers
  TaskRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Join ['', [!Ref ServiceName, TaskRole]]
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: ecs-tasks.amazonaws.com
            Action: 'sts:AssumeRole'
You would use a CloudWatch Event Input Transformer to extract the data you need from the event, and pass that data to the ECS task as environment variable(s) in the target's ContainerOverrides. I don't use CloudFormation, but here's an example using Terraform.
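Translated to CloudFormation, the rule target might look roughly like the sketch below. This is only a sketch, not from the original answer: the resource names, the container name somecontainer, and the invocation role are placeholders, InputTransformer replaces the Input property, and the S3 data events still have to be delivered via CloudTrail for the pattern above to match.
Targets:
  - Id: my-fargate-task
    Arn: !GetAtt Cluster.Arn
    RoleArn: !GetAtt EventInvocationRole.Arn   # hypothetical role allowed to call ecs:RunTask
    EcsParameters:
      TaskDefinitionArn: !Ref TaskDefinition
      LaunchType: FARGATE
      NetworkConfiguration:
        AwsVpcConfiguration:
          Subnets: [!Ref SubnetA, !Ref SubnetB]
    InputTransformer:
      InputPathsMap:
        bucket: $.detail.requestParameters.bucketName
        key: $.detail.requestParameters.key
      InputTemplate: >-
        {"containerOverrides": [{"name": "somecontainer",
         "environment": [{"name": "S3_BUCKET", "value": <bucket>},
                         {"name": "S3_KEY", "value": <key>}]}]}
Inside the task, the bucket and key then show up as ordinary environment variables (for example via os.environ in Python).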
You can't. CloudWatch Events does not pass data to ECS jobs directly. You need to develop your own mechanism for that. For example, trigger a Lambda function first, store the event in SSM Parameter Store or DynamoDB, and then invoke your ECS job, which reads the stored data.

How to create an ECS task in CloudFormation before the CodePipeline is created

I'm trying to define my ECS stack in CloudFormation, including the CI/CD pipeline and ECR repository. However, you run into a bit of a conundrum in that:
To create an ECS task definition (AWS::ECS::TaskDefinition) you have to first create a populated ECR repository (AWS::ECR::Repository) so that you can specify the Image property.
To populate this repository you have to first create the CodePipeline (AWS::CodePipeline::Pipeline), which will run automatically on creation.
To create the pipeline you have to first create the ECS task definition / cluster, as the pipeline needs to deploy onto it (back to step 1).
The solutions to this I can see are:
Don't create the ECR repository in CloudFormation, and instead pass it as a parameter to the stacks.
Define a dummy image in the task definition to deploy the first time and then create the pipeline which will create the real ECR repository and deploy the real image.
Create the CodeBuild project and ECR repository in a separate stack, trigger the CodeBuild project with a lambda function (I don't think it runs automatically on creation like the pipeline does), create the ECS cluster and then create the pipeline. This seems more complicated than it should be.
Are there any better ways of approaching this problem?
The way I do it is to create the ECR repository first, but still using CloudFormation. So I have two templates: one for the ECR repo, and a second one for the rest. The ECR repo is passed as a parameter to the second template, but you can also export its URI to be imported with ImportValue in the second step. The URI is created as follows:
Uri:
  Value: !Sub "${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${MyECR}"
You will also need some initial image in the repo for the task definition. This you can automate by having a separate CodeBuild project (no need for CodePipeline) for this initial build.
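If you prefer the export/import variant, a minimal sketch could look like this (MyECR and the export name ecr-repo-uri are placeholders):
# Template 1: export the repository URI
Outputs:
  Uri:
    Value: !Sub "${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${MyECR}"
    Export:
      Name: ecr-repo-uri

# Template 2: consume it, e.g. for the container image
Image: !Sub
  - "${RepoUri}:latest"
  - RepoUri: !ImportValue ecr-repo-uri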
Another way, using a single stack, is to trigger the Fargate deployment after the image has been pushed, by making an initial commit to the CodeCommit repository and setting the DesiredCount property of the ECS service to zero:
Repo:
  Type: AWS::CodeCommit::Repository
  Properties:
    Code:
      BranchName: main
      S3:
        Bucket: some-bucket
        Key: code.zip
    RepositoryName: !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref AWS::StackId]]]]
    RepositoryDescription: Repository
    Triggers:
      - Name: Trigger
        CustomData: The Code Repository
        DestinationArn: !Ref Topic
        Branches:
          - main
        Events: [all]
Service:
  Type: AWS::ECS::Service
  Properties:
    Cluster: !Ref Cluster
    DesiredCount: 0
    LaunchType: FARGATE
    NetworkConfiguration:
      AwsvpcConfiguration:
        SecurityGroups:
          - !Ref SecG
        Subnets: !Ref Subs
    ServiceName: !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref AWS::StackId]]]]
    TaskDefinition: !Ref TaskDefinition
Build:
  Type: AWS::CodeBuild::Project
  Properties:
    Artifacts:
      Type: CODEPIPELINE
    Source:
      Type: CODEPIPELINE
      BuildSpec: !Sub |
        version: 0.2
        phases:
          pre_build:
            commands:
              - echo "[`date`] PRE_BUILD"
              - echo "Logging in to Amazon ECR..."
              - aws ecr get-login-password --region $REGION | docker login --username AWS --password-stdin $ACCOUNT.dkr.ecr.$REGION.amazonaws.com
              - IMAGE_URI="$ACCOUNT.dkr.ecr.$REGION.amazonaws.com/$REPO:$TAG"
          build:
            commands:
              - echo "[`date`] BUILD"
              - echo "Building Docker Image..."
              - docker build -t $REPO:$TAG .
              - docker tag $REPO:$TAG $IMAGE_URI
          post_build:
            commands:
              - echo "[`date`] POST_BUILD"
              - echo "Pushing Docker Image..."
              - docker push $IMAGE_URI
              - echo Writing image definitions file...
              - printf '[{"name":"svc","imageUri":"%s"}]' $IMAGE_URI > $FILE
        artifacts:
          files: $FILE
    Environment:
      ComputeType: BUILD_GENERAL1_SMALL
      Image: aws/codebuild/standard:6.0
      Type: LINUX_CONTAINER
      EnvironmentVariables:
        - Name: REGION
          Type: PLAINTEXT
          Value: !Ref AWS::Region
        - Name: ACCOUNT
          Type: PLAINTEXT
          Value: !Ref AWS::AccountId
        - Name: TAG
          Type: PLAINTEXT
          Value: latest
        - Name: REPO
          Type: PLAINTEXT
          Value: !Ref Registry
        - Name: FILE
          Type: PLAINTEXT
          Value: !Ref ImagesFile
      PrivilegedMode: true
    Name: !Ref AWS::StackName
    ServiceRole: !GetAtt CodeBuildServiceRole.Arn
Pipeline:
  Type: AWS::CodePipeline::Pipeline
  Properties:
    RoleArn: !GetAtt CodePipelineServiceRole.Arn
    ArtifactStore:
      Type: S3
      Location: !Ref ArtifactBucket
    Stages:
      - Name: Source
        Actions:
          - Name: Site
            ActionTypeId:
              Category: Source
              Owner: AWS
              Version: '1'
              Provider: CodeCommit
            Configuration:
              RepositoryName: !GetAtt Repo.Name
              BranchName: main
              PollForSourceChanges: 'false'
            InputArtifacts: []
            OutputArtifacts:
              - Name: SourceArtifact
            RunOrder: 1
      - Name: Build
        Actions:
          - Name: Docker
            ActionTypeId:
              Category: Build
              Owner: AWS
              Version: '1'
              Provider: CodeBuild
            Configuration:
              ProjectName: !Ref Build
            InputArtifacts:
              - Name: SourceArtifact
            OutputArtifacts:
              - Name: BuildArtifact
            RunOrder: 1
      - Name: Deploy
        Actions:
          - Name: Fargate
            ActionTypeId:
              Category: Deploy
              Owner: AWS
              Version: '1'
              Provider: ECS
            Configuration:
              ClusterName: !Ref Cluster
              FileName: !Ref ImagesFile
              ServiceName: !GetAtt Service.Name
            InputArtifacts:
              - Name: BuildArtifact
            RunOrder: 1
Note that the some-bucket S3 bucket needs to contain the zipped Dockerfile and any source code, without any .git directory included.
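Creating and uploading such an archive could be as simple as the following, run from the project root (the bucket and key names match the placeholders used above):
zip -r code.zip . -x '*.git*'
aws s3 cp code.zip s3://some-bucket/code.zip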
If you use another service for your repo (GitHub, for instance) or you already have a repo, simply remove the Repo section and configure the pipeline's source stage as required.
The entire CloudFormation stack is listed below for reference:
AWSTemplateFormatVersion: '2010-09-09'
Description: CloudFormation Stack to Trigger CodeBuild via CodePipeline
Parameters:
  SecG:
    Description: Single security group
    Type: AWS::EC2::SecurityGroup::Id
  Subs:
    Description: Comma separated subnet IDs
    Type: List<AWS::EC2::Subnet::Id>
  ImagesFile:
    Type: String
    Default: images.json
Resources:
  ArtifactBucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    Properties:
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      Tags:
        - Key: UseWithCodeDeploy
          Value: true
  CodeBuildServiceRole:
    Type: AWS::IAM::Role
    Properties:
      Path: /
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service: codebuild.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: !Sub 'ssm-${AWS::Region}-${AWS::StackName}'
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - ssm:GetParameters
                  - secretsmanager:GetSecretValue
                Resource: '*'
        - PolicyName: !Sub 'logs-${AWS::Region}-${AWS::StackName}'
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - logs:CreateLogGroup
                  - logs:CreateLogStream
                  - logs:PutLogEvents
                Resource: '*'
        - PolicyName: !Sub 'ecr-${AWS::Region}-${AWS::StackName}'
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - ecr:BatchCheckLayerAvailability
                  - ecr:CompleteLayerUpload
                  - ecr:GetAuthorizationToken
                  - ecr:InitiateLayerUpload
                  - ecr:PutImage
                  - ecr:UploadLayerPart
                  - lightsail:*
                Resource: '*'
        - PolicyName: !Sub bkt-${ArtifactBucket}-${AWS::Region}
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - s3:ListBucket
                  - s3:GetBucketLocation
                  - s3:ListBucketVersions
                  - s3:GetBucketVersioning
                Resource:
                  - !Sub arn:aws:s3:::${ArtifactBucket}
                  - arn:aws:s3:::some-bucket
        - PolicyName: !Sub obj-${ArtifactBucket}-${AWS::Region}
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - s3:GetObject
                  - s3:PutObject
                  - s3:GetObjectAcl
                  - s3:PutObjectAcl
                  - s3:GetObjectTagging
                  - s3:PutObjectTagging
                  - s3:GetObjectVersion
                  - s3:GetObjectVersionAcl
                  - s3:PutObjectVersionAcl
                Resource:
                  - !Sub arn:aws:s3:::${ArtifactBucket}/*
                  - arn:aws:s3:::some-bucket/*
  CodeDeployServiceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Sid: '1'
            Effect: Allow
            Principal:
              Service:
                - codedeploy.us-east-1.amazonaws.com
                - codedeploy.eu-west-1.amazonaws.com
            Action: sts:AssumeRole
      Path: /
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AWSCodeDeployRoleForECS
        - arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole
        - arn:aws:iam::aws:policy/service-role/AWSCodeDeployRoleForLambda
  CodeDeployRolePolicies:
    Type: AWS::IAM::Policy
    Properties:
      PolicyName: !Sub 'CDPolicy-${AWS::Region}-${AWS::StackName}'
      PolicyDocument:
        Statement:
          - Effect: Allow
            Resource:
              - '*'
            Action:
              - ec2:Describe*
          - Effect: Allow
            Resource:
              - '*'
            Action:
              - autoscaling:CompleteLifecycleAction
              - autoscaling:DeleteLifecycleHook
              - autoscaling:DescribeLifecycleHooks
              - autoscaling:DescribeAutoScalingGroups
              - autoscaling:PutLifecycleHook
              - autoscaling:RecordLifecycleActionHeartbeat
      Roles:
        - !Ref CodeDeployServiceRole
  CodePipelineServiceRole:
    Type: AWS::IAM::Role
    Properties:
      Path: /
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: codepipeline.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: !Sub 'root-${AWS::Region}-${AWS::StackName}'
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Resource:
                  - !Sub 'arn:aws:s3:::${ArtifactBucket}/*'
                  - !Sub 'arn:aws:s3:::${ArtifactBucket}'
                Effect: Allow
                Action:
                  - s3:PutObject
                  - s3:GetObject
                  - s3:GetObjectVersion
                  - s3:GetBucketAcl
                  - s3:GetBucketLocation
              - Resource: "*"
                Effect: Allow
                Action:
                  - ecs:*
              - Resource: "*"
                Effect: Allow
                Action:
                  - iam:PassRole
                Condition:
                  StringLike:
                    iam:PassedToService:
                      - ecs-tasks.amazonaws.com
              - Resource: !GetAtt Build.Arn
                Effect: Allow
                Action:
                  - codebuild:BatchGetBuilds
                  - codebuild:StartBuild
                  - codebuild:BatchGetBuildBatches
                  - codebuild:StartBuildBatch
              - Resource: !GetAtt Repo.Arn
                Effect: Allow
                Action:
                  - codecommit:CancelUploadArchive
                  - codecommit:GetBranch
                  - codecommit:GetCommit
                  - codecommit:GetRepository
                  - codecommit:GetUploadArchiveStatus
                  - codecommit:UploadArchive
  AmazonCloudWatchEventRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - events.amazonaws.com
            Action: sts:AssumeRole
      Path: /
      Policies:
        - PolicyName: cwe-pipeline-execution
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action: codepipeline:StartPipelineExecution
                Resource: !Sub arn:aws:codepipeline:${AWS::Region}:${AWS::AccountId}:${Pipeline}
  AmazonCloudWatchEventRule:
    Type: AWS::Events::Rule
    Properties:
      EventPattern:
        source:
          - aws.codecommit
        detail-type:
          - CodeCommit Repository State Change
        resources:
          - !GetAtt Repo.Arn
        detail:
          event:
            - referenceCreated
            - referenceUpdated
          referenceType:
            - branch
          referenceName:
            - main
      Targets:
        - Arn: !Sub arn:aws:codepipeline:${AWS::Region}:${AWS::AccountId}:${Pipeline}
          RoleArn: !GetAtt AmazonCloudWatchEventRole.Arn
          Id: codepipeline-Pipeline
  Topic:
    Type: AWS::SNS::Topic
    Properties:
      Subscription:
        - Endpoint: user@example.com
          Protocol: email
  TopicPolicy:
    Type: AWS::SNS::TopicPolicy
    Properties:
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: AllowPublish
            Effect: Allow
            Principal:
              Service:
                - 'codestar-notifications.amazonaws.com'
            Action:
              - 'SNS:Publish'
            Resource:
              - !Ref Topic
      Topics:
        - !Ref Topic
  Repo:
    Type: AWS::CodeCommit::Repository
    Properties:
      Code:
        BranchName: main
        S3:
          Bucket: some-bucket
          Key: code.zip
      RepositoryName: !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref AWS::StackId]]]]
      RepositoryDescription: Repository
      Triggers:
        - Name: Trigger
          CustomData: The Code Repository
          DestinationArn: !Ref Topic
          Branches:
            - main
          Events: [all]
  RepoUser:
    Type: AWS::IAM::User
    Properties:
      Path: '/'
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AWSCodeCommitPowerUser
  RepoUserKey:
    Type: AWS::IAM::AccessKey
    Properties:
      UserName:
        !Ref RepoUser
  Registry:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref AWS::StackId]]]]
      RepositoryPolicyText:
        Version: '2012-10-17'
        Statement:
          - Sid: AllowPushPull
            Effect: Allow
            Principal:
              AWS:
                - !GetAtt CodeDeployServiceRole.Arn
            Action:
              - ecr:GetDownloadUrlForLayer
              - ecr:BatchGetImage
              - ecr:BatchCheckLayerAvailability
              - ecr:PutImage
              - ecr:InitiateLayerUpload
              - ecr:UploadLayerPart
              - ecr:CompleteLayerUpload
  Build:
    Type: AWS::CodeBuild::Project
    Properties:
      Artifacts:
        Type: CODEPIPELINE
      Source:
        Type: CODEPIPELINE
        BuildSpec: !Sub |
          version: 0.2
          phases:
            pre_build:
              commands:
                - echo "[`date`] PRE_BUILD"
                - echo "Logging in to Amazon ECR..."
                - aws ecr get-login-password --region $REGION | docker login --username AWS --password-stdin $ACCOUNT.dkr.ecr.$REGION.amazonaws.com
                - IMAGE_URI="$ACCOUNT.dkr.ecr.$REGION.amazonaws.com/$REPO:$TAG"
            build:
              commands:
                - echo "[`date`] BUILD"
                - echo "Building Docker Image..."
                - docker build -t $REPO:$TAG .
                - docker tag $REPO:$TAG $IMAGE_URI
            post_build:
              commands:
                - echo "[`date`] POST_BUILD"
                - echo "Pushing Docker Image..."
                - docker push $IMAGE_URI
                - echo Writing image definitions file...
                - printf '[{"name":"svc","imageUri":"%s"}]' $IMAGE_URI > $FILE
          artifacts:
            files: $FILE
      Environment:
        ComputeType: BUILD_GENERAL1_SMALL
        Image: aws/codebuild/standard:6.0
        Type: LINUX_CONTAINER
        EnvironmentVariables:
          - Name: REGION
            Type: PLAINTEXT
            Value: !Ref AWS::Region
          - Name: ACCOUNT
            Type: PLAINTEXT
            Value: !Ref AWS::AccountId
          - Name: TAG
            Type: PLAINTEXT
            Value: latest
          - Name: REPO
            Type: PLAINTEXT
            Value: !Ref Registry
          - Name: FILE
            Type: PLAINTEXT
            Value: !Ref ImagesFile
        PrivilegedMode: true
      Name: !Ref AWS::StackName
      ServiceRole: !GetAtt CodeBuildServiceRole.Arn
  Pipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      RoleArn: !GetAtt CodePipelineServiceRole.Arn
      ArtifactStore:
        Type: S3
        Location: !Ref ArtifactBucket
      Stages:
        - Name: Source
          Actions:
            - Name: Site
              ActionTypeId:
                Category: Source
                Owner: AWS
                Version: '1'
                Provider: CodeCommit
              Configuration:
                RepositoryName: !GetAtt Repo.Name
                BranchName: main
                PollForSourceChanges: 'false'
              InputArtifacts: []
              OutputArtifacts:
                - Name: SourceArtifact
              RunOrder: 1
        - Name: Build
          Actions:
            - Name: Docker
              ActionTypeId:
                Category: Build
                Owner: AWS
                Version: '1'
                Provider: CodeBuild
              Configuration:
                ProjectName: !Ref Build
              InputArtifacts:
                - Name: SourceArtifact
              OutputArtifacts:
                - Name: BuildArtifact
              RunOrder: 1
        - Name: Deploy
          Actions:
            - Name: Fargate
              ActionTypeId:
                Category: Deploy
                Owner: AWS
                Version: '1'
                Provider: ECS
              Configuration:
                ClusterName: !Ref Cluster
                FileName: !Ref ImagesFile
                ServiceName: !GetAtt Service.Name
              InputArtifacts:
                - Name: BuildArtifact
              RunOrder: 1
  Cluster:
    Type: AWS::ECS::Cluster
    Properties:
      ClusterName: !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref AWS::StackId]]]]
  FargateTaskExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - ecs-tasks.amazonaws.com
            Action:
              - sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
  TaskRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - ecs-tasks.amazonaws.com
            Action:
              - sts:AssumeRole
  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      ContainerDefinitions:
        - Name: svc
          Image: !Sub ${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${Registry}:latest
          PortMappings:
            - ContainerPort: 8080
      Cpu: 256
      ExecutionRoleArn: !Ref FargateTaskExecutionRole
      Memory: 512
      NetworkMode: awsvpc
      RequiresCompatibilities:
        - FARGATE
      RuntimePlatform:
        CpuArchitecture: ARM64
        OperatingSystemFamily: LINUX
      TaskRoleArn: !Ref TaskRole
  Service:
    Type: AWS::ECS::Service
    Properties:
      Cluster: !Ref Cluster
      DesiredCount: 0
      LaunchType: FARGATE
      NetworkConfiguration:
        AwsvpcConfiguration:
          SecurityGroups:
            - !Ref SecG
          Subnets: !Ref Subs
      ServiceName: !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref AWS::StackId]]]]
      TaskDefinition: !Ref TaskDefinition
Outputs:
  ArtifactBucketName:
    Description: ArtifactBucket S3 Bucket Name
    Value: !Ref ArtifactBucket
  ArtifactBucketSecureUrl:
    Description: ArtifactBucket S3 Bucket Domain Name
    Value: !Sub 'https://${ArtifactBucket.DomainName}'
  ClusterName:
    Value: !Ref Cluster
  ServiceName:
    Value: !GetAtt Service.Name
  RepoUserAccessKey:
    Description: Repo User Access Key
    Value: !Ref RepoUserKey
  RepoUserSecretKey:
    Description: Repo User Secret Key
    Value: !GetAtt RepoUserKey.SecretAccessKey
  BuildArn:
    Description: CodeBuild Project ARN
    Value: !GetAtt Build.Arn
  RepoArn:
    Description: CodeCommit Repository ARN
    Value: !GetAtt Repo.Arn
  RepoName:
    Description: CodeCommit Repository Name
    Value: !GetAtt Repo.Name
  RepoCloneUrlHttp:
    Description: CodeCommit HTTP Clone URL
    Value: !GetAtt Repo.CloneUrlHttp
  RepoCloneUrlSsh:
    Description: CodeCommit SSH Clone URL
    Value: !GetAtt Repo.CloneUrlSsh
  PipelineUrl:
    Description: CodePipeline URL
    Value: !Sub https://console.aws.amazon.com/codepipeline/home?region=${AWS::Region}#/view/${Pipeline}
  RegistryUri:
    Description: ECR Repository URI
    Value: !GetAtt Registry.RepositoryUri
  TopicArn:
    Description: CodeCommit Notification SNS Topic ARN
    Value: !Ref Topic
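Since the service is created with DesiredCount: 0, once the pipeline has built and pushed the first image you would raise the count yourself, for example (the cluster and service names are in the stack outputs above):
aws ecs update-service --cluster <cluster-name> --service <service-name> --desired-count 1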
Hope this helps!

CodePipeline failing with "The action failed because either the artifact or the Amazon S3 bucket could not be found."

I have the following pipeline definition:
Pipeline:
  Type: 'AWS::CodePipeline::Pipeline'
  Properties:
    Name: !Ref AppName
    RoleArn: {"Fn::GetAtt" : ["PipelineServiceRole", "Arn"] }
    ArtifactStore:
      Type: S3
      Location: !Ref ArtifactBucket
    Stages:
      - Name: Source
        Actions:
          - Name: SourceAction
            ActionTypeId:
              Category: Source
              Owner: AWS
              Version: 1
              Provider: S3
            OutputArtifacts:
              - Name: "PipelineArtifact"
            Configuration:
              S3Bucket: !Ref ArtifactBucket
              S3ObjectKey: !Ref ArtifactName
              PollForSourceChanges: true
            RunOrder: 1
      - Name: Deploy
        Actions:
          - Name: DeployAction
            ActionTypeId:
              Category: Deploy
              Owner: AWS
              Version: 1
              Provider: ElasticBeanstalk
            InputArtifacts:
              - Name: "PipelineArtifact"
            Configuration:
              ApplicationName: !Ref EbApplication
              EnvironmentName: !Ref EbEnvironment
The source stage completes successfully, but then the Deploy stage fails with the error from the title: "The action failed because either the artifact or the Amazon S3 bucket could not be found."
I already checked that the artifact is at the expected location in S3 and the PipelineServiceRole has full permissions (literally */*).
What could be causing this error?
For Provider: ElasticBeanstalk, the S3ObjectKey MUST point to a .zip file.
(and no, a .jar won't work)
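In practice that means the object referenced by S3ObjectKey should be a zip bundle of the application; something like this produces and uploads one (the bucket and key names are placeholders corresponding to the ArtifactBucket and ArtifactName parameters above):
zip -r app.zip .
aws s3 cp app.zip s3://my-artifact-bucket/app.zip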

Set CodeBuild env var from previous step in codepipeline

I have the following CodePipeline (CloudFormation template snippet).
My BuildDeployWebappTest step needs the name of an S3 bucket created in the previous ExecuteChangeSet step.
As you can see, the BuildDeployWebappTest step uses the WebappCodeBuildProjectTestStage CodeBuild project config. The WEBAPP_S3_BUCKET env var is what needs to be set to the value of the bucket name created in the ExecuteChangeSet step.
How can I make this happen?
Currently I have to create the S3 bucket outside of this pipeline and "hard code" the S3 bucket name as a CloudFormation parameter (TestStageS3Bucket) to the pipeline below.
The ExecuteChangeSet step produces a CloudFormation output named WebAppBucket. I can export it if needed.
I know CodeBuild supports multiple InputArtifacts, but I don't know how I'd reference a CloudFormation OutputArtifact in the CodeBuild project config or within the CodeBuild buildspec.yml file itself.
...
WebappCodeBuildProjectTestStage:
  Type: AWS::CodeBuild::Project
  Properties:
    Artifacts:
      Type: CODEPIPELINE
    Environment:
      ComputeType: BUILD_GENERAL1_SMALL
      PrivilegedMode: false
      Type: LINUX_CONTAINER
      Image: !Ref CodeBuildImage
      EnvironmentVariables:
        - Name: WEBAPP_S3_BUCKET
          Value: !Ref TestStageS3Bucket
        - Name: APP_STAGE
          Value: test
        - Name: ANGULAR_BUILD
          Value: test
    ServiceRole: !Ref CodeBuildRole
    Source:
      Type: CODEPIPELINE
      BuildSpec: !Ref WebappBuildspecPath
    TimeoutInMinutes: !Ref BuildTimeout
...
Pipeline:
  Type: AWS::CodePipeline::Pipeline
  Properties:
    ArtifactStore:
      Location: !Ref 'ArtifactStoreBucket'
      Type: S3
    DisableInboundStageTransitions: []
    Name: !Ref AWS::StackName
    RoleArn: !GetAtt PipelineRole.Arn
    Stages:
      - Name: Source
        Actions:
          - Name: Source
            ActionTypeId:
              Category: Source
              Owner: ThirdParty
              Provider: GitHub
              Version: '1'
            OutputArtifacts:
              - Name: MyAppCode
            Configuration:
              Owner: !Ref GithubOrg
              Repo: !Select [ 0, !Split [ '--', !Ref 'AWS::StackName' ] ]
              PollForSourceChanges: false
              Branch: !Select [ 1, !Split [ '--', !Ref 'AWS::StackName' ] ]
              OAuthToken: !Ref GithubOAuthToken
            RunOrder: 1
      - Name: DeployTestResources
        Actions:
          - Name: CreateChangeSet
            ActionTypeId:
              Category: Deploy
              Owner: AWS
              Provider: CloudFormation
              Version: '1'
            InputArtifacts:
              - Name: MyAppCode
            Configuration:
              ActionMode: CHANGE_SET_REPLACE
              RoleArn: !GetAtt CFNRole.Arn
              Capabilities: CAPABILITY_IAM
              StackName: !Sub "${AWS::StackName}--test--gen"
              ChangeSetName: !Sub "${AWS::StackName}--test--changeset"
              TemplatePath: MyAppCode::aws/cloudformation/template.yml
              TemplateConfiguration: !Sub "MyAppCode::${TestCloudFormationTemplateParameters}"
            RunOrder: 1
          - Name: ExecuteChangeSet
            ActionTypeId:
              Category: Deploy
              Owner: AWS
              Provider: CloudFormation
              Version: '1'
            Configuration:
              ActionMode: CHANGE_SET_EXECUTE
              RoleArn: !GetAtt CFNRole.Arn
              StackName: !Sub "${AWS::StackName}--test--gen"
              ChangeSetName: !Sub "${AWS::StackName}--test--changeset"
              OutputFileName: TestOutput.json
            OutputArtifacts:
              - Name: DeployTestResourcesOutput
            RunOrder: 2
      - Name: BuildDeployWebappTest
        Actions:
          - Name: CodeBuild
            InputArtifacts:
              - Name: MyAppCode
            ActionTypeId:
              Category: Build
              Owner: AWS
              Provider: CodeBuild
              Version: '1'
            Configuration:
              ProjectName: !Ref WebappCodeBuildProjectTestStage
            RunOrder: 1
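One common approach (not from the original post, so treat it as a sketch) is to wire DeployTestResourcesOutput in as a secondary input artifact of the CodeBuild action and read TestOutput.json in the buildspec: the CloudFormation action writes the stack outputs into that JSON file, and CodeBuild exposes secondary sources under CODEBUILD_SRC_DIR_<ArtifactName>. Assuming the output key is WebAppBucket:
# In the BuildDeployWebappTest action:
InputArtifacts:
  - Name: MyAppCode
  - Name: DeployTestResourcesOutput
Configuration:
  ProjectName: !Ref WebappCodeBuildProjectTestStage
  PrimarySource: MyAppCode

# In the buildspec:
phases:
  build:
    commands:
      - export WEBAPP_S3_BUCKET=$(jq -r '.WebAppBucket' "$CODEBUILD_SRC_DIR_DeployTestResourcesOutput/TestOutput.json")
      - echo "Deploying to $WEBAPP_S3_BUCKET"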

AWS CloudFormation CodePipeline: Could not fetch the contents of the repository from GitHub

I'm attempting to setup an AWS CloudFormation configuration using CodePipeline and GitHub.
I've failed both at my own example project and the tutorial: Create a GitHub Pipeline with AWS CloudFormation.
All resources are created, but in CodePipeline I continuously get the following error during the initial "Source" stage.
Could not fetch the contents of the repository from GitHub.
Note that this is not a permissions error; it's something else that I haven't been able to find anywhere on Google.
GitHub can be configured to work if I stop using CloudFormation and create the CodePipeline through the console, but for my purposes I need to use CloudFormation and stick to a template.
Here is the CloudFormation template, copied from the tutorial:
Parameters:
  BranchName:
    Description: GitHub branch name
    Type: String
    Default: master
  RepositoryName:
    Description: GitHub repository name
    Type: String
    Default: test
  GitHubOwner:
    Type: String
  GitHubSecret:
    Type: String
    NoEcho: true
  GitHubOAuthToken:
    Type: String
    NoEcho: true
  ApplicationName:
    Description: CodeDeploy application name
    Type: String
    Default: DemoApplication
  BetaFleet:
    Description: Fleet configured in CodeDeploy
    Type: String
    Default: DemoFleet
Resources:
  CodePipelineArtifactStoreBucket:
    Type: "AWS::S3::Bucket"
  CodePipelineArtifactStoreBucketPolicy:
    Type: "AWS::S3::BucketPolicy"
    Properties:
      Bucket: !Ref CodePipelineArtifactStoreBucket
      PolicyDocument:
        Version: 2012-10-17
        Statement:
          - Sid: DenyUnEncryptedObjectUploads
            Effect: Deny
            Principal: "*"
            Action: "s3:PutObject"
            Resource: !Join
              - ""
              - - !GetAtt
                  - CodePipelineArtifactStoreBucket
                  - Arn
                - /*
            Condition:
              StringNotEquals:
                "s3:x-amz-server-side-encryption": "aws:kms"
          - Sid: DenyInsecureConnections
            Effect: Deny
            Principal: "*"
            Action: "s3:*"
            Resource: !Join
              - ""
              - - !GetAtt
                  - CodePipelineArtifactStoreBucket
                  - Arn
                - /*
            Condition:
              Bool:
                "aws:SecureTransport": false
  AppPipelineWebhook:
    Type: "AWS::CodePipeline::Webhook"
    Properties:
      Authentication: GITHUB_HMAC
      AuthenticationConfiguration:
        SecretToken: !Ref GitHubSecret
      Filters:
        - JsonPath: $.ref
          MatchEquals: "refs/heads/{Branch}"
      TargetPipeline: !Ref AppPipeline
      TargetAction: SourceAction
      Name: AppPipelineWebhook
      TargetPipelineVersion: !GetAtt
        - AppPipeline
        - Version
      RegisterWithThirdParty: true
  AppPipeline:
    Type: "AWS::CodePipeline::Pipeline"
    Properties:
      Name: github-events-pipeline
      RoleArn: !GetAtt
        - CodePipelineServiceRole
        - Arn
      Stages:
        - Name: Source
          Actions:
            - Name: SourceAction
              ActionTypeId:
                Category: Source
                Owner: ThirdParty
                Version: 1
                Provider: GitHub
              OutputArtifacts:
                - Name: SourceOutput
              Configuration:
                Owner: !Ref GitHubOwner
                Repo: !Ref RepositoryName
                Branch: !Ref BranchName
                OAuthToken: !Ref GitHubOAuthToken
                PollForSourceChanges: false
              RunOrder: 1
        - Name: Beta
          Actions:
            - Name: BetaAction
              InputArtifacts:
                - Name: SourceOutput
              ActionTypeId:
                Category: Deploy
                Owner: AWS
                Version: 1
                Provider: CodeDeploy
              Configuration:
                ApplicationName: !Ref ApplicationName
                DeploymentGroupName: !Ref BetaFleet
              RunOrder: 1
      ArtifactStore:
        Type: S3
        Location: !Ref CodePipelineArtifactStoreBucket
  CodePipelineServiceRole:
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - codepipeline.amazonaws.com
            Action: "sts:AssumeRole"
      Path: /
      Policies:
        - PolicyName: AWS-CodePipeline-Service-3
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - "codecommit:CancelUploadArchive"
                  - "codecommit:GetBranch"
                  - "codecommit:GetCommit"
                  - "codecommit:GetUploadArchiveStatus"
                  - "codecommit:UploadArchive"
                Resource: "*"
              - Effect: Allow
                Action:
                  - "codedeploy:CreateDeployment"
                  - "codedeploy:GetApplicationRevision"
                  - "codedeploy:GetDeployment"
                  - "codedeploy:GetDeploymentConfig"
                  - "codedeploy:RegisterApplicationRevision"
                Resource: "*"
              - Effect: Allow
                Action:
                  - "codebuild:BatchGetBuilds"
                  - "codebuild:StartBuild"
                Resource: "*"
              - Effect: Allow
                Action:
                  - "devicefarm:ListProjects"
                  - "devicefarm:ListDevicePools"
                  - "devicefarm:GetRun"
                  - "devicefarm:GetUpload"
                  - "devicefarm:CreateUpload"
                  - "devicefarm:ScheduleRun"
                Resource: "*"
              - Effect: Allow
                Action:
                  - "lambda:InvokeFunction"
                  - "lambda:ListFunctions"
                Resource: "*"
              - Effect: Allow
                Action:
                  - "iam:PassRole"
                Resource: "*"
              - Effect: Allow
                Action:
                  - "elasticbeanstalk:*"
                  - "ec2:*"
                  - "elasticloadbalancing:*"
                  - "autoscaling:*"
                  - "cloudwatch:*"
                  - "s3:*"
                  - "sns:*"
                  - "cloudformation:*"
                  - "rds:*"
                  - "sqs:*"
                  - "ecs:*"
                Resource: "*"
I have taken the following steps:
provided Github Organization, Repo & Branch
setup a personal access token on GitHub and supplied it to the template GitHubOAuthToken parameter with access to repo:all & admin:repo_hook
setup a random string and provided it to the GitHubSecret
tried not including a GitHubSecret as in many other examples
verified that AWS CodePipeline for my region is listed in Github Applications under "Authorized OAuth Applications"
In attempts to start from a clear slate, I've also done the following:
cleared all GitHub webhooks before starting aws codepipeline list-webhooks & aws codepipeline delete-webhook --name
added a new personal access token
tried multiple repos & branches
Any ideas how I can get GitHub to work with CloudFormation & CodePipeline?
Found the solution: the GitHub organization name is case-sensitive.
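In other words, the GitHubOwner parameter has to match the organization name exactly as GitHub spells it, for example when deploying the stack (the file name and MyOrg are placeholders):
aws cloudformation deploy \
  --template-file pipeline.yml \
  --stack-name github-pipeline \
  --capabilities CAPABILITY_IAM \
  --parameter-overrides GitHubOwner=MyOrg GitHubOAuthToken=<token> GitHubSecret=<secret>
Passing myorg when the organization is actually MyOrg leads to the "Could not fetch the contents of the repository" error above.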