Nullify VPC config if the account number is XXXXXX - aws-cloudformation

I have the CloudFormation template below, and I don't want to set up a VPC configuration if the account is XXXXXX:
AWSTemplateFormatVersion: '2010-09-09'
Description: VPC function.
Mappings:
  "129921924252":
    "ap-southeast-1":
      Subnets:
        - "vpc-PrivateSubnetId1"
        - "vpc-PrivateSubnetId2"
Conditions:
  CheckAccount: {"Fn::Equals": [!Ref AWS::AccountId, "123456780976"]}
Resources:
  Function:
    Type: AWS::Lambda::Function
    Properties:
      Handler: index.handler
      Role: arn:aws:iam::129921924252:role/service-role/ASG-Lambda-role-n27an6nj
      Code:
        ZipFile:
          Fn::Sub: |
            import json
            import boto3
            import os
            from botocore.exceptions import ClientError
      Runtime: nodejs12.x
      Timeout: 5
      TracingConfig:
        Mode: Active
      VpcConfig:
        - Fn::If:
            - CheckAccount
            - SecurityGroupIds:
                - sg-0305baf25b2dd4891
              SubnetIds:
                - Fn::ImportValue:
                    Fn::Select: [0, Fn::FindInMap: [!Ref AWS::AccountId, !Ref AWS::Region, Subnets]]
                - Fn::ImportValue:
                    Fn::Select: [1, Fn::FindInMap: [!Ref AWS::AccountId, !Ref AWS::Region, Subnets]]
            - !Ref AWS::NoValue
I'm getting an error when trying the above code:
Properties validation failed for resource Function with message: #/VpcConfig: expected type: JSONObject, found: JSONArray

The syntax of your Fn::If statement seems to be incorrect. Have a look at the following example from the documentation:
UpdatePolicy:
  AutoScalingRollingUpdate:
    !If
      - RollingUpdates
      - MaxBatchSize: 2
        MinInstancesInService: 2
        PauseTime: PT0M30S
      - !Ref "AWS::NoValue"
So in your case that would be the following. Note that Fn::If now sits directly under VpcConfig, without a leading dash, so that VpcConfig resolves to an object rather than a list:
VpcConfig:
  Fn::If:
    - CheckAccount
    - SecurityGroupIds:
        - sg-0305baf25b2dd4891
      SubnetIds:
        - Fn::ImportValue:
            Fn::Select: [0, Fn::FindInMap: [!Ref AWS::AccountId, !Ref AWS::Region, Subnets]]
        - Fn::ImportValue:
            Fn::Select: [1, Fn::FindInMap: [!Ref AWS::AccountId, !Ref AWS::Region, Subnets]]
    - !Ref AWS::NoValue
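The same thing can also be written with the short-form !If, mirroring the documentation example above. A sketch, reusing the condition and values from the question and hardcoding the two export names from its Mappings block instead of the FindInMap lookup:
VpcConfig:
  !If
    - CheckAccount
    - SecurityGroupIds:
        - sg-0305baf25b2dd4891
      SubnetIds:
        # Export names taken from the question's Mappings values
        - !ImportValue vpc-PrivateSubnetId1
        - !ImportValue vpc-PrivateSubnetId2
    - !Ref AWS::NoValue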

Related

AWS Cloudformation - security group ids list export and import - SecurityGroupIds not valid

I'm working with 2 nested stacks. I need to use security group ids exported from NestedA in NestedB. The exported security group ids are to be used in a SecurityGroupIds property in NestedB, based on conditions.
However, CloudFormation returns the error: Property validation failure: [Value of property {/LaunchTemplateData/SecurityGroupIds/0} does not match type {String}]
The following are snippets of what I have tried:
NestedA export
Outputs:
  SG1:
    Value: !Join
      - ','
      - - !Ref securitygroup1
        - !Ref securitygroup2
    Export:
      Name: !Sub ${ExportVpcStackName}-SG1
  SG2:
    Value: !Join
      - ','
      - - !Ref securitygroup3
        - !Ref securitygroup4
    Export:
      Name: !Sub ${ExportVpcStackName}-SG2
ParentStack
Resources:
  ...
  launchtemplate:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3/nestedB.yaml
      ...
      SG1:
        Fn::ImportValue: !Sub ${ExportVpcStackName}-SG1
      SG2:
        Fn::ImportValue: !Sub ${ExportVpcStackName}-SG2
NestedB import
Parameters:
  SG1:
    Type: List<AWS::EC2::SecurityGroup::Id>
  SG2:
    Type: List<AWS::EC2::SecurityGroup::Id>
Resources:
  launchtemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateData:
        ...
        SecurityGroupIds:
          !If
            - Condition1
            - - !Ref SG1
              - !Ref SG2
            - !If
                - Condition2
                - - !Ref SG1
                - !Ref AWS::NoValue
I've also tried importing each of the security groups directly/individually into NestedB, with no success, i.e.:
NestedA export
Outputs:
  securitygroup1:
    Value: !Ref securitygroup1
    Export:
      Name: !Sub ${ExportVpcStackName}-securitygroup1
  securitygroup2:
    Value: !Ref securitygroup2
    Export:
      Name: !Sub ${ExportVpcStackName}-securitygroup2
  securitygroup3:
    Value: !Ref securitygroup3
    Export:
      Name: !Sub ${ExportVpcStackName}-securitygroup3
  securitygroup4:
    Value: !Ref securitygroup4
    Export:
      Name: !Sub ${ExportVpcStackName}-securitygroup4
NestedB import
Resources:
  launchtemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateData:
        ...
        SecurityGroupIds:
          !If
            - Condition1
            - - Fn::ImportValue: !Sub ${ExportVpcStackName}-securitygroup1
              - Fn::ImportValue: !Sub ${ExportVpcStackName}-securitygroup2
              - Fn::ImportValue: !Sub ${ExportVpcStackName}-securitygroup3
              - Fn::ImportValue: !Sub ${ExportVpcStackName}-securitygroup4
            - !If
                - Condition2
                - - Fn::ImportValue: !Sub ${ExportVpcStackName}-securitygroup1
                  - Fn::ImportValue: !Sub ${ExportVpcStackName}-securitygroup2
                - !Ref AWS::NoValue
What's the error I'm making?
Edit: I have tried @marcin's suggestion but still get the error:
Property validation failure: [Value of property {/LaunchTemplateData/SecurityGroupIds/0} does not match type {String}]
Instead of Type: List<AWS::EC2::SecurityGroup::Id>, please use CommaDelimitedList.
Also, your SG1 export is a list of SGs; you have to use Fn::Select to get the individual SG values from the list.
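A minimal sketch of how that looks in NestedB, assuming SG1 still receives the comma-joined export from NestedA (parameter and resource names reused from the question):
Parameters:
  SG1:
    Type: CommaDelimitedList
Resources:
  launchtemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateData:
        SecurityGroupIds:
          # Pick the individual security group ids out of the delimited list
          - !Select [0, !Ref SG1]
          - !Select [1, !Ref SG1]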

Export and ImportValue in another stack under sub

I am trying to get a value from one stack to another using the syntax below.
Stack one:
Outputs:
  CompRestAPI:
    Description: Rest Api Id
    Value: !Ref CompRestAPI
    Export:
      Name: 'CompRestAPI'
Stack two:
CompRestApiWaf:
  Type: AWS::WAFv2::WebACLAssociation
  DependsOn: CompApiGatewayStage
  Properties:
    RestApiId: !ImportValue 'CompRestAPI'
    ResourceArn: !Sub 'arn:aws:apigateway:${REGION}:/${RestApiId}/${STAGENAME}-apistage'
    WebACLArn: !Ref WafId
I am able to get the values for other resources using the first syntax, but I am not able to get the value for RestApiId under !Sub:
RestApiId: !ImportValue 'CompRestAPI'
ResourceArn: !Sub 'arn:aws:apigateway:${REGION}:/${RestApiId}/apistage'
So is there any way to use !ImportValue under !Sub?
I tried it using the code below; validation passes, but it still shows me an error:
Error reason: The ARN isn't valid. A valid ARN begins with arn: and includes other information separated by colons or slashes., field: RESOURCE_ARN, parameter:
CompRestApiWaf:
  Type: AWS::WAFv2::WebACLAssociation
  DependsOn: CompApiGatewayStage
  Properties:
    ResourceArn: !Sub 'arn:aws:apigateway:${REGION}:/{!ImportValue CompRestAPI}/stages/apistage'
    WebACLArn: !Ref WafId
I got it done using Fn::Join:
SourceArn:
  Fn::Join:
    - ""
    - - 'arn:aws:execute-api:'
      - !Ref AWS::Region
      - ':'
      - !Ref AWS::AccountId
      - ':'
      - !Ref ApiGatewayRestApiResource
      - '/*'
This should work:
ResourceArn: !Sub
  - 'arn:aws:apigateway:${REGION}:/${CompRestAPI}/stages/apistage'
  - CompRestAPI: !ImportValue CompRestAPI
You can expand the second parameter to have multiple keys for multiple imports, like so:
SecretString: !Sub
  - 'postgres://${username}:${password}@${dbhost}:${dbport}/${dbname}'
  - username: !Ref 'DBUser'
    password: !Ref 'DBPassword'
    dbhost: !Ref DbMasterDnsEntry
    dbport: !GetAtt AuroraPgCluster.Endpoint.Port
    dbname: !Ref 'DBName'

AWS CloudFormation Fn::ImportValue doesn't like !Join?

I have the following resource in my CloudFormation template that's trying to create a listener rule. The idea is, based on the passed-in EnvironmentType and the AWS Region, I want to import the listener ARN from the appropriate CloudFormation stack that exported it.
Parameters:
  EnvironmentType:
    Type: String
    Default: "sandbox"
  ECSClusterStackNameParameter:
    Type: String
    Default: "ECS-US-Sandbox"
Mappings:
  production:
    us-east-1:
      stackWithAlbListenerInfo: "ECS-US-Prod"
    eu-north-1:
      stackWithAlbListenerInfo: "ECS-EU-Prod"
  staging:
    us-east-1:
      stackWithAlbListenerInfo: "ECS-US-Staging"
    eu-north-1:
      stackWithAlbListenerInfo: ""
  sandbox:
    us-east-1:
      stackWithAlbListenerInfo: "ECS-US-Sandbox"
    eu-north-1:
      stackWithAlbListenerInfo: ""
Conditions:
  StackExists:
    !Not [ !Equals [ !FindInMap [ !Ref EnvironmentType, !Ref "AWS::Region", stackWithAlbListenerInfo ], "" ] ]
Resources:
  AlbListenerRule:
    Type: AWS::ElasticLoadBalancingV2::ListenerRule
    Condition: UseListenerRule
    Properties:
      ListenerArn:
        !If
          - StackExists
          - Fn::ImportValue:
              !Join
                - "-"
                - - !FindInMap [ !Ref EnvironmentType, !Ref "AWS::Region", stackWithAlbListenerInfo ]
                  - "ListenerArn"
          - Fn::ImportValue:
              - !Sub "${ECSClusterStackNameParameter}-ListenerArn"
However, it fails validation due to this error, and it seems like the first Fn::ImportValue doesn't like the !Join. But !Join returns a concatenated string, correct? What am I missing?
ERROR: Service: marcom-stats-service, cfnUpdate error: com.amazonaws.services.cloudformation.model.AmazonCloudFormationException: Template error: the attribute in Fn::ImportValue must be a string or a function that returns a string (Service: AmazonCloudFormation; Status Code: 400; Error Code: ValidationError; Request ID: 2678552e-cf6c-46e1-b640-a7c07de385c2; Proxy: null)
UPDATE:
Though Robert Kossendey's answer fixed my error, my original template was wrong. This is really what I wanted to do. I hope it helps someone.
AlbListenerRule:
  Type: AWS::ElasticLoadBalancingV2::ListenerRule
  Condition: UseListenerRule
  Properties:
    ListenerArn:
      Fn::ImportValue: !Sub
        - ${StackName}-ListenerArn
        - { StackName: !If [ StackExists, !FindInMap [ !Ref EnvironmentType, !Ref "AWS::Region", stackWithAlbListenerInfo ], !Ref ECSClusterStackNameParameter ] }
Now I see it, I think. At the second import you need to remove the dash in front of the !Sub; the dash makes the argument a one-element list, and Fn::ImportValue expects a string:
- Fn::ImportValue:
    !Sub "${ECSClusterStackNameParameter}-ListenerArn"

How to create an ECS task in CloudFormation before the CodePipeline is created

I'm trying to define my ECS stack in CloudFormation, including the CI/CD pipeline and ECR repository. However, you run into a bit of a conundrum:
To create an ECS task definition (AWS::ECS::TaskDefinition) you have to first create a populated ECR repository (AWS::ECR::Repository) so that you can specify the Image property.
To populate this repository you have to first create the CodePipeline (AWS::CodePipeline::Pipeline) which will run automatically on creation.
To create the pipeline you have to first create the ECS task definition / cluster as the pipeline needs to deploy onto it (back to step 1).
The solutions to this I can see are:
Don't create the ECR repository in CloudFormation; pass it as a parameter to the stacks instead.
Define a dummy image in the task definition to deploy the first time and then create the pipeline, which will create the real ECR repository and deploy the real image (see the sketch below).
Create the CodeBuild project and ECR repository in a separate stack, trigger the CodeBuild project with a lambda function (I don't think it runs automatically on creation like the pipeline does), create the ECS cluster and then create the pipeline. This seems more complicated than it should be.
Are there any better ways of approaching this problem?
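To illustrate option 2, a minimal sketch of a task definition pointing at a placeholder image until the pipeline pushes a real one. The public Amazon Linux image and the container name svc are assumptions, not from the original question:
TaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Cpu: 256
    Memory: 512
    NetworkMode: awsvpc
    RequiresCompatibilities:
      - FARGATE
    ContainerDefinitions:
      - Name: svc
        # Placeholder image; the first pipeline run replaces it via the
        # image definitions file produced by the build
        Image: public.ecr.aws/amazonlinux/amazonlinux:latest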
The way I do it is to create the ECR repository first, but still using CloudFormation. So I have two templates: one for the ECR repo, and a second one for the rest. The ECR repo is passed as a parameter to the second template. But you can also export its URI to be imported with Fn::ImportValue in the second step. The URI is created as follows:
Uri:
  Value: !Sub "${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${MyECR}"
You will also need some initial image in the repo for the task definition. You can automate this by having a separate CodeBuild project (no need for CodePipeline) for this initial build.
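If you go the export route, the output needs an Export name the second template can import. A sketch; the export name ${AWS::StackName}-EcrUri is an assumption:
Outputs:
  Uri:
    Value: !Sub "${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${MyECR}"
    Export:
      Name: !Sub "${AWS::StackName}-EcrUri"
The second template can then reference it with Fn::ImportValue instead of taking a parameter.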
Another way, using a single stack, is to trigger the Fargate deployment after pushing the image, by making an initial commit to the CodeCommit repository and setting the DesiredCount property of the ECS service to zero:
Repo:
  Type: AWS::CodeCommit::Repository
  Properties:
    Code:
      BranchName: main
      S3:
        Bucket: some-bucket
        Key: code.zip
    RepositoryName: !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref AWS::StackId]]]]
    RepositoryDescription: Repository
    Triggers:
      - Name: Trigger
        CustomData: The Code Repository
        DestinationArn: !Ref Topic
        Branches:
          - main
        Events: [all]
Service:
  Type: AWS::ECS::Service
  Properties:
    Cluster: !Ref Cluster
    DesiredCount: 0
    LaunchType: FARGATE
    NetworkConfiguration:
      AwsvpcConfiguration:
        SecurityGroups:
          - !Ref SecG
        Subnets: !Ref Subs
    ServiceName: !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref AWS::StackId]]]]
    TaskDefinition: !Ref TaskDefinition
Build:
  Type: AWS::CodeBuild::Project
  Properties:
    Artifacts:
      Type: CODEPIPELINE
    Source:
      Type: CODEPIPELINE
      BuildSpec: !Sub |
        version: 0.2
        phases:
          pre_build:
            commands:
              - echo "[`date`] PRE_BUILD"
              - echo "Logging in to Amazon ECR..."
              - aws ecr get-login-password --region $REGION | docker login --username AWS --password-stdin $ACCOUNT.dkr.ecr.$REGION.amazonaws.com
              - IMAGE_URI="$ACCOUNT.dkr.ecr.$REGION.amazonaws.com/$REPO:$TAG"
          build:
            commands:
              - echo "[`date`] BUILD"
              - echo "Building Docker Image..."
              - docker build -t $REPO:$TAG .
              - docker tag $REPO:$TAG $IMAGE_URI
          post_build:
            commands:
              - echo "[`date`] POST_BUILD"
              - echo "Pushing Docker Image..."
              - docker push $IMAGE_URI
              - echo Writing image definitions file...
              - printf '[{"name":"svc","imageUri":"%s"}]' $IMAGE_URI > $FILE
        artifacts:
          files: $FILE
    Environment:
      ComputeType: BUILD_GENERAL1_SMALL
      Image: aws/codebuild/standard:6.0
      Type: LINUX_CONTAINER
      EnvironmentVariables:
        - Name: REGION
          Type: PLAINTEXT
          Value: !Ref AWS::Region
        - Name: ACCOUNT
          Type: PLAINTEXT
          Value: !Ref AWS::AccountId
        - Name: TAG
          Type: PLAINTEXT
          Value: latest
        - Name: REPO
          Type: PLAINTEXT
          Value: !Ref Registry
        - Name: FILE
          Type: PLAINTEXT
          Value: !Ref ImagesFile
      PrivilegedMode: true
    Name: !Ref AWS::StackName
    ServiceRole: !GetAtt CodeBuildServiceRole.Arn
Pipeline:
  Type: AWS::CodePipeline::Pipeline
  Properties:
    RoleArn: !GetAtt CodePipelineServiceRole.Arn
    ArtifactStore:
      Type: S3
      Location: !Ref ArtifactBucket
    Stages:
      - Name: Source
        Actions:
          - Name: Site
            ActionTypeId:
              Category: Source
              Owner: AWS
              Version: '1'
              Provider: CodeCommit
            Configuration:
              RepositoryName: !GetAtt Repo.Name
              BranchName: main
              PollForSourceChanges: 'false'
            InputArtifacts: []
            OutputArtifacts:
              - Name: SourceArtifact
            RunOrder: 1
      - Name: Build
        Actions:
          - Name: Docker
            ActionTypeId:
              Category: Build
              Owner: AWS
              Version: '1'
              Provider: CodeBuild
            Configuration:
              ProjectName: !Ref Build
            InputArtifacts:
              - Name: SourceArtifact
            OutputArtifacts:
              - Name: BuildArtifact
            RunOrder: 1
      - Name: Deploy
        Actions:
          - Name: Fargate
            ActionTypeId:
              Category: Deploy
              Owner: AWS
              Version: '1'
              Provider: ECS
            Configuration:
              ClusterName: !Ref Cluster
              FileName: !Ref ImagesFile
              ServiceName: !GetAtt Service.Name
            InputArtifacts:
              - Name: BuildArtifact
            RunOrder: 1
Note that the some-bucket S3 bucket needs to contain the zipped Dockerfile and any source code, without any .git directory included.
If you use another service for your repo, GitHub for instance, or you already have a repo, simply remove the Repo section and configure the pipeline's source stage as required.
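For example, a GitHub source stage via CodeStar Connections might look roughly like this (a sketch; the connection ARN and repository id are placeholders you would supply):
- Name: Source
  Actions:
    - Name: GitHub
      ActionTypeId:
        Category: Source
        Owner: AWS
        Version: '1'
        Provider: CodeStarSourceConnection
      Configuration:
        # Placeholder values; create the connection first, in the console
        # or via AWS::CodeStarConnections::Connection
        ConnectionArn: arn:aws:codestar-connections:us-east-1:111111111111:connection/example
        FullRepositoryId: my-org/my-repo
        BranchName: main
      OutputArtifacts:
        - Name: SourceArtifact
      RunOrder: 1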
The entire CloudFormation stack is listed below for reference:
AWSTemplateFormatVersion: '2010-09-09'
Description: CloudFormation Stack to Trigger CodeBuild via CodePipeline
Parameters:
  SecG:
    Description: Single security group
    Type: AWS::EC2::SecurityGroup::Id
  Subs:
    Description: Comma separated subnet IDs
    Type: List<AWS::EC2::Subnet::Id>
  ImagesFile:
    Type: String
    Default: images.json
Resources:
  ArtifactBucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    Properties:
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      Tags:
        - Key: UseWithCodeDeploy
          Value: true
  CodeBuildServiceRole:
    Type: AWS::IAM::Role
    Properties:
      Path: /
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service: codebuild.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: !Sub 'ssm-${AWS::Region}-${AWS::StackName}'
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - ssm:GetParameters
                  - secretsmanager:GetSecretValue
                Resource: '*'
        - PolicyName: !Sub 'logs-${AWS::Region}-${AWS::StackName}'
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - logs:CreateLogGroup
                  - logs:CreateLogStream
                  - logs:PutLogEvents
                Resource: '*'
        - PolicyName: !Sub 'ecr-${AWS::Region}-${AWS::StackName}'
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - ecr:BatchCheckLayerAvailability
                  - ecr:CompleteLayerUpload
                  - ecr:GetAuthorizationToken
                  - ecr:InitiateLayerUpload
                  - ecr:PutImage
                  - ecr:UploadLayerPart
                  - lightsail:*
                Resource: '*'
        - PolicyName: !Sub bkt-${ArtifactBucket}-${AWS::Region}
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - s3:ListBucket
                  - s3:GetBucketLocation
                  - s3:ListBucketVersions
                  - s3:GetBucketVersioning
                Resource:
                  - !Sub arn:aws:s3:::${ArtifactBucket}
                  - arn:aws:s3:::some-bucket
        - PolicyName: !Sub obj-${ArtifactBucket}-${AWS::Region}
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - s3:GetObject
                  - s3:PutObject
                  - s3:GetObjectAcl
                  - s3:PutObjectAcl
                  - s3:GetObjectTagging
                  - s3:PutObjectTagging
                  - s3:GetObjectVersion
                  - s3:GetObjectVersionAcl
                  - s3:PutObjectVersionAcl
                Resource:
                  - !Sub arn:aws:s3:::${ArtifactBucket}/*
                  - arn:aws:s3:::some-bucket/*
  CodeDeployServiceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Sid: '1'
            Effect: Allow
            Principal:
              Service:
                - codedeploy.us-east-1.amazonaws.com
                - codedeploy.eu-west-1.amazonaws.com
            Action: sts:AssumeRole
      Path: /
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AWSCodeDeployRoleForECS
        - arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole
        - arn:aws:iam::aws:policy/service-role/AWSCodeDeployRoleForLambda
  CodeDeployRolePolicies:
    Type: AWS::IAM::Policy
    Properties:
      PolicyName: !Sub 'CDPolicy-${AWS::Region}-${AWS::StackName}'
      PolicyDocument:
        Statement:
          - Effect: Allow
            Resource:
              - '*'
            Action:
              - ec2:Describe*
          - Effect: Allow
            Resource:
              - '*'
            Action:
              - autoscaling:CompleteLifecycleAction
              - autoscaling:DeleteLifecycleHook
              - autoscaling:DescribeLifecycleHooks
              - autoscaling:DescribeAutoScalingGroups
              - autoscaling:PutLifecycleHook
              - autoscaling:RecordLifecycleActionHeartbeat
      Roles:
        - !Ref CodeDeployServiceRole
  CodePipelineServiceRole:
    Type: AWS::IAM::Role
    Properties:
      Path: /
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: codepipeline.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: !Sub 'root-${AWS::Region}-${AWS::StackName}'
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Resource:
                  - !Sub 'arn:aws:s3:::${ArtifactBucket}/*'
                  - !Sub 'arn:aws:s3:::${ArtifactBucket}'
                Effect: Allow
                Action:
                  - s3:PutObject
                  - s3:GetObject
                  - s3:GetObjectVersion
                  - s3:GetBucketAcl
                  - s3:GetBucketLocation
              - Resource: "*"
                Effect: Allow
                Action:
                  - ecs:*
              - Resource: "*"
                Effect: Allow
                Action:
                  - iam:PassRole
                Condition:
                  StringLike:
                    iam:PassedToService:
                      - ecs-tasks.amazonaws.com
              - Resource: !GetAtt Build.Arn
                Effect: Allow
                Action:
                  - codebuild:BatchGetBuilds
                  - codebuild:StartBuild
                  - codebuild:BatchGetBuildBatches
                  - codebuild:StartBuildBatch
              - Resource: !GetAtt Repo.Arn
                Effect: Allow
                Action:
                  - codecommit:CancelUploadArchive
                  - codecommit:GetBranch
                  - codecommit:GetCommit
                  - codecommit:GetRepository
                  - codecommit:GetUploadArchiveStatus
                  - codecommit:UploadArchive
  AmazonCloudWatchEventRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - events.amazonaws.com
            Action: sts:AssumeRole
      Path: /
      Policies:
        - PolicyName: cwe-pipeline-execution
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action: codepipeline:StartPipelineExecution
                Resource: !Sub arn:aws:codepipeline:${AWS::Region}:${AWS::AccountId}:${Pipeline}
  AmazonCloudWatchEventRule:
    Type: AWS::Events::Rule
    Properties:
      EventPattern:
        source:
          - aws.codecommit
        detail-type:
          - CodeCommit Repository State Change
        resources:
          - !GetAtt Repo.Arn
        detail:
          event:
            - referenceCreated
            - referenceUpdated
          referenceType:
            - branch
          referenceName:
            - main
      Targets:
        - Arn: !Sub arn:aws:codepipeline:${AWS::Region}:${AWS::AccountId}:${Pipeline}
          RoleArn: !GetAtt AmazonCloudWatchEventRole.Arn
          Id: codepipeline-Pipeline
  Topic:
    Type: AWS::SNS::Topic
    Properties:
      Subscription:
        - Endpoint: user@example.com
          Protocol: email
  TopicPolicy:
    Type: AWS::SNS::TopicPolicy
    Properties:
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: AllowPublish
            Effect: Allow
            Principal:
              Service:
                - 'codestar-notifications.amazonaws.com'
            Action:
              - 'SNS:Publish'
            Resource:
              - !Ref Topic
      Topics:
        - !Ref Topic
  Repo:
    Type: AWS::CodeCommit::Repository
    Properties:
      Code:
        BranchName: main
        S3:
          Bucket: some-bucket
          Key: code.zip
      RepositoryName: !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref AWS::StackId]]]]
      RepositoryDescription: Repository
      Triggers:
        - Name: Trigger
          CustomData: The Code Repository
          DestinationArn: !Ref Topic
          Branches:
            - main
          Events: [all]
  RepoUser:
    Type: AWS::IAM::User
    Properties:
      Path: '/'
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AWSCodeCommitPowerUser
  RepoUserKey:
    Type: AWS::IAM::AccessKey
    Properties:
      UserName: !Ref RepoUser
  Registry:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref AWS::StackId]]]]
      RepositoryPolicyText:
        Version: '2012-10-17'
        Statement:
          - Sid: AllowPushPull
            Effect: Allow
            Principal:
              AWS:
                - !GetAtt CodeDeployServiceRole.Arn
            Action:
              - ecr:GetDownloadUrlForLayer
              - ecr:BatchGetImage
              - ecr:BatchCheckLayerAvailability
              - ecr:PutImage
              - ecr:InitiateLayerUpload
              - ecr:UploadLayerPart
              - ecr:CompleteLayerUpload
  Build:
    Type: AWS::CodeBuild::Project
    Properties:
      Artifacts:
        Type: CODEPIPELINE
      Source:
        Type: CODEPIPELINE
        BuildSpec: !Sub |
          version: 0.2
          phases:
            pre_build:
              commands:
                - echo "[`date`] PRE_BUILD"
                - echo "Logging in to Amazon ECR..."
                - aws ecr get-login-password --region $REGION | docker login --username AWS --password-stdin $ACCOUNT.dkr.ecr.$REGION.amazonaws.com
                - IMAGE_URI="$ACCOUNT.dkr.ecr.$REGION.amazonaws.com/$REPO:$TAG"
            build:
              commands:
                - echo "[`date`] BUILD"
                - echo "Building Docker Image..."
                - docker build -t $REPO:$TAG .
                - docker tag $REPO:$TAG $IMAGE_URI
            post_build:
              commands:
                - echo "[`date`] POST_BUILD"
                - echo "Pushing Docker Image..."
                - docker push $IMAGE_URI
                - echo Writing image definitions file...
                - printf '[{"name":"svc","imageUri":"%s"}]' $IMAGE_URI > $FILE
          artifacts:
            files: $FILE
      Environment:
        ComputeType: BUILD_GENERAL1_SMALL
        Image: aws/codebuild/standard:6.0
        Type: LINUX_CONTAINER
        EnvironmentVariables:
          - Name: REGION
            Type: PLAINTEXT
            Value: !Ref AWS::Region
          - Name: ACCOUNT
            Type: PLAINTEXT
            Value: !Ref AWS::AccountId
          - Name: TAG
            Type: PLAINTEXT
            Value: latest
          - Name: REPO
            Type: PLAINTEXT
            Value: !Ref Registry
          - Name: FILE
            Type: PLAINTEXT
            Value: !Ref ImagesFile
        PrivilegedMode: true
      Name: !Ref AWS::StackName
      ServiceRole: !GetAtt CodeBuildServiceRole.Arn
  Pipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      RoleArn: !GetAtt CodePipelineServiceRole.Arn
      ArtifactStore:
        Type: S3
        Location: !Ref ArtifactBucket
      Stages:
        - Name: Source
          Actions:
            - Name: Site
              ActionTypeId:
                Category: Source
                Owner: AWS
                Version: '1'
                Provider: CodeCommit
              Configuration:
                RepositoryName: !GetAtt Repo.Name
                BranchName: main
                PollForSourceChanges: 'false'
              InputArtifacts: []
              OutputArtifacts:
                - Name: SourceArtifact
              RunOrder: 1
        - Name: Build
          Actions:
            - Name: Docker
              ActionTypeId:
                Category: Build
                Owner: AWS
                Version: '1'
                Provider: CodeBuild
              Configuration:
                ProjectName: !Ref Build
              InputArtifacts:
                - Name: SourceArtifact
              OutputArtifacts:
                - Name: BuildArtifact
              RunOrder: 1
        - Name: Deploy
          Actions:
            - Name: Fargate
              ActionTypeId:
                Category: Deploy
                Owner: AWS
                Version: '1'
                Provider: ECS
              Configuration:
                ClusterName: !Ref Cluster
                FileName: !Ref ImagesFile
                ServiceName: !GetAtt Service.Name
              InputArtifacts:
                - Name: BuildArtifact
              RunOrder: 1
  Cluster:
    Type: AWS::ECS::Cluster
    Properties:
      ClusterName: !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref AWS::StackId]]]]
  FargateTaskExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - ecs-tasks.amazonaws.com
            Action:
              - sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
  TaskRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - ecs-tasks.amazonaws.com
            Action:
              - sts:AssumeRole
  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      ContainerDefinitions:
        - Name: svc
          Image: !Sub ${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${Registry}:latest
          PortMappings:
            - ContainerPort: 8080
      Cpu: 256
      ExecutionRoleArn: !Ref FargateTaskExecutionRole
      Memory: 512
      NetworkMode: awsvpc
      RequiresCompatibilities:
        - FARGATE
      RuntimePlatform:
        CpuArchitecture: ARM64
        OperatingSystemFamily: LINUX
      TaskRoleArn: !Ref TaskRole
  Service:
    Type: AWS::ECS::Service
    Properties:
      Cluster: !Ref Cluster
      DesiredCount: 0
      LaunchType: FARGATE
      NetworkConfiguration:
        AwsvpcConfiguration:
          SecurityGroups:
            - !Ref SecG
          Subnets: !Ref Subs
      ServiceName: !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref AWS::StackId]]]]
      TaskDefinition: !Ref TaskDefinition
Outputs:
  ArtifactBucketName:
    Description: ArtifactBucket S3 Bucket Name
    Value: !Ref ArtifactBucket
  ArtifactBucketSecureUrl:
    Description: ArtifactBucket S3 Bucket Domain Name
    Value: !Sub 'https://${ArtifactBucket.DomainName}'
  ClusterName:
    Value: !Ref Cluster
  ServiceName:
    Value: !GetAtt Service.Name
  RepoUserAccessKey:
    Description: RepoUser Access Key
    Value: !Ref RepoUserKey
  RepoUserSecretKey:
    Description: RepoUser Secret Key
    Value: !GetAtt RepoUserKey.SecretAccessKey
  BuildArn:
    Description: CodeBuild URL
    Value: !GetAtt Build.Arn
  RepoArn:
    Description: CodeCommit Repository ARN
    Value: !GetAtt Repo.Arn
  RepoName:
    Description: CodeCommit Repository Name
    Value: !GetAtt Repo.Name
  RepoCloneUrlHttp:
    Description: CodeCommit HTTP Clone URL
    Value: !GetAtt Repo.CloneUrlHttp
  RepoCloneUrlSsh:
    Description: CodeCommit SSH Clone URL
    Value: !GetAtt Repo.CloneUrlSsh
  PipelineUrl:
    Description: CodePipeline URL
    Value: !Sub https://console.aws.amazon.com/codepipeline/home?region=${AWS::Region}#/view/${Pipeline}
  RegistryUri:
    Description: ECR Repository URI
    Value: !GetAtt Registry.RepositoryUri
  TopicArn:
    Description: CodeCommit Notification SNS Topic ARN
    Value: !Ref Topic
Hope this helps!

Trying to add additional EBS volumes to MarkLogic Cluster Cloud Formation Template

New to YAML and CloudFormation. Trying to utilize MarkLogic's template for deploying a clustered MarkLogic DB utilizing our own VPC. We have the cluster working, but have gotten to the point where we would like to mount an additional volume to save backups to.
We added additional volumes:
MarklogicVolume1root:
  Type: 'AWS::EC2::Volume'
  Properties:
    AvailabilityZone: !Select [0, !Ref AZ]
    Size: !Ref VolumeSize
    Tags:
      - Key: Name
        Value: MarkLogic-GroupA-Host1-Volume1Aroot
    VolumeType: !Ref VolumeType
    Encrypted: !If [UseVolumeEncryption, 'true', 'false']
    KmsKeyId: !If [HasCustomEBSKey, !Ref VolumeEncryptionKey, !Ref 'AWS::NoValue']
  Metadata:
    'AWS::CloudFormation::Designer':
      id: c81032f7-b0ec-47ca-a236-e24d57b49ae3
MarklogicVolume1data:
  Type: 'AWS::EC2::Volume'
  Properties:
    AvailabilityZone: !Select [0, !Ref AZ]
    Size: !Ref VolumeSizeData
    Tags:
      - Key: Name
        Value: MarkLogic-GroupA-Host1-Volume1Adata
    VolumeType: !Ref VolumeType
    Encrypted: !If [UseVolumeEncryption, 'true', 'false']
    KmsKeyId: !If [HasCustomEBSKey, !Ref VolumeEncryptionKey, !Ref 'AWS::NoValue']
MarklogicVolume1backup:
  Type: 'AWS::EC2::Volume'
  Properties:
    AvailabilityZone: !Select [0, !Ref AZ]
    Size: !Ref VolumeSizeBackup
    Tags:
      - Key: Name
        Value: MarkLogic-GroupA-Host1-Volume1Abackup
    VolumeType: !Ref VolumeType
    Encrypted: !If [UseVolumeEncryption, 'true', 'false']
    KmsKeyId: !If [HasCustomEBSKey, !Ref VolumeEncryptionKey, !Ref 'AWS::NoValue']
We updated the block device mappings within the launch configuration and the user data script:
LaunchConfig1:
  Type: 'AWS::AutoScaling::LaunchConfiguration'
  DependsOn:
    - InstanceSecurityGroup
  Properties:
    BlockDeviceMappings:
      - DeviceName: !Ref MarklogicVolume1root
        NoDevice: true
        Ebs: {}
      - DeviceName: !Ref MarklogicVolume1data
        NoDevice: true
        Ebs: {}
      - DeviceName: !Ref MarklogicVolume1backup
        NoDevice: true
        Ebs: {}
    KeyName: !Ref KeyName
    ImageId: !If [EssentialEnterprise, !FindInMap [LicenseRegion2AMI, !Ref 'AWS::Region', "Enterprise"], !FindInMap [LicenseRegion2AMI, !Ref 'AWS::Region', "BYOL"]]
    UserData: !Base64
      'Fn::Join':
        - ''
        - - MARKLOGIC_CLUSTER_NAME=
          - !Ref MarkLogicDDBTable
          - |+

          - MARKLOGIC_EBS_VOLUME1=
          - !Ref MarklogicVolume1root
          - ',:'
          - !Ref VolumeSize
          - '::'
          - !Ref VolumeType
          - |
            ::,*
          - |
          - MARKLOGIC_EBS_VOLUME2=
          - !Ref MarklogicVolume1data
          - ',:'
          - !Ref VolumeSizeData
          - '::'
          - !Ref VolumeType
          - |
            ::,*
          - |
          - MARKLOGIC_EBS_VOLUME3=
          - !Ref MarklogicVolume1backup
          - ',:'
          - !Ref VolumeSizeBackup
          - '::'
          - !Ref VolumeType
          - |
            ::,*
          - |
            MARKLOGIC_NODE_NAME=NodeA#
          - MARKLOGIC_ADMIN_USERNAME=
          - !Ref AdminUser
          - |+

          - MARKLOGIC_ADMIN_PASSWORD=
          - !Ref AdminPass
          - |+

          - |
            MARKLOGIC_CLUSTER_MASTER=1
          - MARKLOGIC_LICENSEE=
          - !Ref Licensee
          - |+

          - MARKLOGIC_LICENSE_KEY=
          - !Ref LicenseKey
          - |+

          - MARKLOGIC_LOG_SNS=
          - !Ref LogSNS
          - |+

          - !If
            - UseVolumeEncryption
            - !Join
              - ''
              - - 'MARKLOGIC_EBS_KEY='
                - !If
                  - HasCustomEBSKey
                  - !Ref VolumeEncryptionKey
                  - 'default'
            - ''
We are able to deploy the additional volumes, but they are not mounting. This is also interrupting the final configuration of the EC2 instances, because they then fail their load balancer health checks. Any help or insight is greatly appreciated!
The documentation talks about the steps needed to add an EBS volume to an instance:
Creating an EBS Volume and Attaching it to an Instance
The brief overview is you need to:
1. Create the volume
2. Attach the volume (as /dev/sdf or higher)
3. Run init-volumes-from-system
Once that is done, you may also want to manually check the DynamoDB table associated with the CF template, and ensure the entries have the updated volume information from the host.
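In CloudFormation terms, the create-and-attach steps map to an AWS::EC2::Volume plus an AWS::EC2::VolumeAttachment. A sketch for a standalone instance (not an auto scaling group); the MarkLogicHost resource name, size, and device name are assumptions, and init-volumes-from-system still has to be run on the host afterwards:
BackupVolume:
  Type: AWS::EC2::Volume
  Properties:
    # Volume must be in the same AZ as the instance it attaches to
    AvailabilityZone: !GetAtt MarkLogicHost.AvailabilityZone
    Size: 200
BackupVolumeAttachment:
  Type: AWS::EC2::VolumeAttachment
  Properties:
    # /dev/sdf or higher, per the steps above
    Device: /dev/sdf
    InstanceId: !Ref MarkLogicHost
    VolumeId: !Ref BackupVolume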
Michael's answer covers how to add an EBS volume after a stack is created; this is different from the question, which is how to define a CF template that pre-defines volumes. If the volume is created but not mounted, I recommend looking into the system logs, both the MarkLogic logs and /var/log/messages, for indications. If that does not provide sufficient information to resolve the issue, then you should open a support ticket for assistance.