CloudFormation configuration depending on the region being created - aws-cloudformation

Which is the best approach for having a CloudFormation template (CFT) use different configuration values from the mappings depending on the region it's being created in?
For example, let's assume I have a service deployed in 2 different regions (Europe and America). Each instance of the service writes to its own DynamoDB (DDB) table, let's call it Data. In order to create the DDB table I use a CFT. Since the traffic is not the same in both regions, I would like to set different capacity units for the table.
For the above, I could add a parameter to the CFT stating whether the template is for Europe or America, create a mapping with the desired values keyed by the parameter value, and depending on it get one or the other.
Like:
AWSTemplateFormatVersion: 2010-09-09
Resources:
  DdbData:
    Type: 'AWS::DynamoDB::Table'
    Properties:
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      TableName: 'Data'
      BillingMode: PROVISIONED
      ProvisionedThroughput:
        ReadCapacityUnits: !FindInMap
          - DdbDataCapacityUnits
          - 'Read'
          - !Ref Continent
        WriteCapacityUnits: !FindInMap
          - DdbDataCapacityUnits
          - 'Write'
          - !Ref Continent
Mappings:
  DdbDataCapacityUnits:
    Read:
      Europe: '10'
      America: '5'
    Write:
      Europe: '5'
      America: '10'
Parameters:
  Continent:
    Description: The continent in which the stack is being created, either Europe or America
    Type: String
    AllowedValues:
      - Europe
      - America
    Default: Europe
However, since I'm managing multiple regions with the same CFT, I would like to use StackSets to update both of them as one. The stacks in a stack set all have the same parameters; it's just that the creation is done in multiple regions.
My approach was to use the pseudo parameter AWS::Region as the key in the configuration mapping, like:
AWSTemplateFormatVersion: 2010-09-09
Resources:
  DdbData:
    Type: 'AWS::DynamoDB::Table'
    Properties:
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      TableName: 'Data'
      BillingMode: PROVISIONED
      ProvisionedThroughput:
        ReadCapacityUnits: !FindInMap
          - DdbDataCapacityUnits
          - 'Read'
          - !Ref AWS::Region
        WriteCapacityUnits: !FindInMap
          - DdbDataCapacityUnits
          - 'Write'
          - !Ref AWS::Region
Mappings:
  DdbDataCapacityUnits:
    Read:
      'eu-west-1': '10'
      'us-east-1': '5'
    Write:
      'eu-west-1': '5'
      'us-east-1': '10'
But this approach doesn't work, as I get an error when creating the stack: Template validation error: Template format error: Mappings attribute name 'eu-west-1' must contain only alphanumeric characters.
Is there any approach in which I can achieve this?

You can swap the region and the Read/Write attribute.
For example, here is a mapping for finding AMIs:
Mappings:
  AmazonLinuxEcsAMI:
    us-east-1:
      AMI: ami-07eb698ce660402d2
    us-east-2:
      AMI: ami-0a0c6574ce16ce87a
    us-west-1:
      AMI: ami-04c22ba97a0c063c4
    us-west-2:
      AMI: ami-09568291a9d6c804c
It can then be used with:
ImageId: !FindInMap [AmazonLinuxEcsAMI, !Ref 'AWS::Region', AMI]
So, try putting the region as the first level instead of Read, then putting Read and Write as the next level.
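Applied to the question's table, the restructured mapping might look like this (a sketch using the same regions and values as the question, with the region promoted to the top level, where hyphenated keys are accepted):

```yaml
Mappings:
  DdbDataCapacityUnits:
    'eu-west-1':
      Read: '10'
      Write: '5'
    'us-east-1':
      Read: '5'
      Write: '10'
```

The lookups then become, for example, ReadCapacityUnits: !FindInMap [DdbDataCapacityUnits, !Ref 'AWS::Region', Read].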

Related

how to pass bucket/key name to fargate job via a cloudwatch event trigger on s3 object creation event?

I have created a Fargate task and am trying to trigger it via an S3 object creation event (see sample below) via CloudFormation. As it cannot be triggered directly, I have created a CloudWatch Event rule. I am trying to pass the bucket and object name to my Fargate task code. Doing some research, I came across InputTransformer, but I'm not sure how to pass the value of my bucket and key name and how to read it in my Python code. Any help will be appreciated.
AWSTemplateFormatVersion: 2010-09-09
Description: An example CloudFormation template for Fargate.
Parameters:
  VPC:
    Type: AWS::EC2::VPC::Id
  SubnetA:
    Type: AWS::EC2::Subnet::Id
  SubnetB:
    Type: AWS::EC2::Subnet::Id
  Image:
    Type: String
    Default: 123456789012.dkr.ecr.region.amazonaws.com/image:tag
Resources:
  mybucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: 'mytestbucket-us'
  cloudwatchEvent:
    Type: AWS::Events::Rule
    Properties:
      EventPattern:
        source:
          - aws.s3
        detail:
          eventSource:
            - s3.amazonaws.com
          eventName:
            - PutObject
            - CompleteMultipartUpload
          requestParameters:
            bucketName:
              - !Ref mybucket
      Targets:
        - Id: my-fargate-task
          Arn: myclusterArn
          RoleArn: myinvocationrolearn
          Input:
            'Fn::Sub':
              - >-
                {"containerOverrides": [{"name": "somecontainer"}]}
          EcsParameters:
            TaskDefinition:
            LaunchType: 'FARGATE'
            ...
            NetworkConfiguration:
            ...
  Cluster:
    Type: AWS::ECS::Cluster
    Properties:
      ClusterName: !Join ['', [!Ref ServiceName, Cluster]]
  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    DependsOn: LogGroup
    Properties:
      Family: !Join ['', [!Ref ServiceName, TaskDefinition]]
      NetworkMode: awsvpc
      RequiresCompatibilities:
        - FARGATE
      Cpu: 256
      Memory: 2GB
      ExecutionRoleArn: !Ref ExecutionRole
      TaskRoleArn: !Ref TaskRole
      ContainerDefinitions:
        - Name: !Ref ServiceName
          Image: !Ref Image
  # A role needed by ECS
  ExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Join ['', [!Ref ServiceName, ExecutionRole]]
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: ecs-tasks.amazonaws.com
            Action: 'sts:AssumeRole'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy'
  # A role for the containers
  TaskRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Join ['', [!Ref ServiceName, TaskRole]]
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: ecs-tasks.amazonaws.com
            Action: 'sts:AssumeRole'
You would use a CloudWatch Event Input Transformer to extract the data you need from the event, and pass that data to the ECS task as environment variable(s) in the target's ContainerOverrides. I don't use CloudFormation, but here's an example using Terraform.
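In CloudFormation, that idea might be sketched roughly as follows (hypothetical: the JSONPath expressions assume CloudTrail-style S3 events as in the question's EventPattern, and the container name somecontainer and environment variable names are taken from or invented for the question):

```yaml
# Hypothetical sketch: a rule target that uses InputTransformer instead of a
# static Input, extracting the bucket and key from the event and passing them
# to the container as environment variables.
Targets:
  - Id: my-fargate-task
    Arn: myclusterArn
    RoleArn: myinvocationrolearn
    InputTransformer:
      InputPathsMap:
        bucket: '$.detail.requestParameters.bucketName'
        key: '$.detail.requestParameters.key'
      InputTemplate: >-
        {"containerOverrides": [{"name": "somecontainer",
        "environment": [{"name": "BUCKET_NAME", "value": <bucket>},
        {"name": "OBJECT_KEY", "value": <key>}]}]}
    # EcsParameters unchanged from the question
```

The Python code in the task could then read the values with os.environ["BUCKET_NAME"] and os.environ["OBJECT_KEY"].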
You can't. CloudWatch Events does not pass data to ECS jobs. You need to develop your own mechanism for that. For example, trigger a Lambda first, store the event in S3, Parameter Store, or DynamoDB, and then invoke your ECS job, which will read the stored data.

CloudFormation - How can I reference a serverless usage plan?

Problem:
I want to associate an existing API key with a newly created "Usage Plan", which is created via AWS SAM as below:
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Resources:
  ApiGatewayApi:
    Type: AWS::Serverless::Api
    Properties:
      Name: MyAPI
      OpenApiVersion: '2.0'
      EndpointConfiguration:
        Type: REGIONAL
      StageName: prod
      MethodSettings:
        - ResourcePath: "/*"
          HttpMethod: "*"
          ThrottlingBurstLimit: 1
          ThrottlingRateLimit: 1
      Domain:
        DomainName: api.mywebsite.com
        CertificateArn: 'arn:aws:acm:eu-north-1:101010101010:certificate/88888888-4444-3333-2222-1111111111'
        EndpointConfiguration: REGIONAL
        Route53:
          HostedZoneId: 'OMMITTD82737373'
        BasePath:
          - /
      Auth:
        UsagePlan:
          CreateUsagePlan: PER_API
          Description: Usage plan for this API
          Quota:
            Limit: 1000
            Period: MONTH
          Throttle:
            BurstLimit: 1
            RateLimit: 1
          Tags:
            - Key: Name
              Value: MyUsagePlan
  usagePlanKey:
    Type: 'AWS::ApiGateway::UsagePlanKey'
    Properties:
      KeyId: 2672762828
      KeyType: API_KEY
      UsagePlanId: ????????????????????
Is it possible to reference the UsagePlanId here?
I tried: !Ref UsagePlan but no success...
Any ideas on how to do it?
Thanks
As far as I know, there is no way to reference the UsagePlan created as part of your Api.
However, you can create the UsagePlan outside of ApiGatewayApi as a separate resource and associate it with your ApiGatewayApi. Then you can easily reference it in your UsagePlanKey:
usagePlan:
  Type: 'AWS::ApiGateway::UsagePlan'
  Properties:
    ApiStages:
      - ApiId: !Ref ApiGatewayApi
        Stage: prod
    ... (rest of the properties omitted)
usagePlanKey:
  Type: 'AWS::ApiGateway::UsagePlanKey'
  Properties:
    KeyId: 2672762828
    KeyType: API_KEY
    UsagePlanId: !Ref usagePlan

Could not create AWS::ECS::Service via cloudformation yaml, got Model validation failed

During creation of an AWS::ECS::Service via CloudFormation I got the error: Model validation failed
The error is related to HealthCheckGracePeriodSeconds and some other properties. The error detail is: expected type: Number, found: String.
In the YAML it is already a number, so it's not clear to me what's going wrong. I already tried declaring it as a string or as a parameter with type Number.
I need some hint because I am stuck at this point.
Error is:
Model validation failed
(
#/HealthCheckGracePeriodSeconds: expected type: Number, found: String
#/DesiredCount: expected type: Number, found: String
#/DeploymentConfiguration/MaximumPercent: expected type: Number, found: String
#/DeploymentConfiguration/MinimumHealthyPercent: expected type: Number, found: String
)
Definition in template.yaml is:
ServiceDefinition:
  Type: AWS::ECS::Service
  Properties:
    ServiceName: !Ref ServiceName
    Cluster: !Ref ClusterName
    TaskDefinition: !Ref TaskDefinition
    DeploymentConfiguration:
      MinimumHealthyPercent: 100
      MaximumPercent: 200
    DesiredCount: 1
    HealthCheckGracePeriodSeconds: 60
    LaunchType: FARGATE
    NetworkConfiguration:
      AwsVpcConfiguration:
        AssignPublicIP: ENABLED
        SecurityGroups: !FindInMap [Env2SecurityGroups, !Ref AWS::AccountId, securitygroup]
        Subnets: !FindInMap [Env2PublicSubnets, !Ref AWS::AccountId, subnets]
The error was caused by SecurityGroups and Subnets being passed in the wrong format. The FindInMap function was used to extract the subnets and security groups, but these properties require a list. That can be achieved with the Split function.
Unfortunately, the wrong format leads to a completely misleading error message.
Declare mappings like this:
Mappings:
  Env2SecurityGroups:
    '111111111111':
      securitygroup: 'sg-1111111111111111'
    '222222222222':
      securitygroup: 'sg-2222222222222222'
    '333333333333':
      securitygroup: 'sg-3333333333333333'
  Env2PublicSubnets:
    '111111111111':
      subnets: subnet-1111111111111111,subnet-22222222222222222,subnet-33333333333333333
    '222222222222':
      subnets: subnet-1111111111111111,subnet-22222222222222222,subnet-33333333333333333
    '333333333333':
      subnets: subnet-1111111111111111,subnet-22222222222222222,subnet-33333333333333333
Use !Split combined with !FindInMap to get a list:
SecurityGroups: !Split [",", !FindInMap [ Env2SecurityGroups, !Ref AWS::AccountId, securitygroup] ]
Subnets: !Split [",", !FindInMap [ Env2PublicSubnets, !Ref AWS::AccountId, subnets] ]

How to combine list from Fn::FindInMap with additional items?

I have the following CloudFormation file:
Mappings:
  MyMap:
    us-east-1:
      Roles:
        - "roleA"
        - "roleB"
    ...
Resources:
  MyPolicy:
    Type: "AWS::IAM::Policy"
    Properties:
      PolicyDocument:
        Statement:
          - Effect: "Allow"
            Action:
              - "sts:AssumeRole"
            Resource:
              Fn::FindInMap: ["MyMap", !Ref AWS::Region, "Roles"]
Everything works fine; however, now I need to add an extra role that is defined for all regions. Simply adding an additional role to the Resource: section doesn't work, since it fails with a template syntax error.
Is there a way to combine the list of results from FindInMap with another item? Something like:
Resource:
  Fn::FindInMap: ["MyMap", !Ref AWS::Region, "Roles"]
  - "roleC"
Yes, you can, but it won't be pretty:
Resource:
  Fn::Split:
    - ','
    - Fn::Join:
        - ','
        - - !Join [',', !FindInMap ["MyMap", !Ref "AWS::Region", "Roles"]]
          - 'roleC'
Basically, first you join the MyMap list into a string, then you append roleC to the string, and then you split it back into a List of Strings.
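What CloudFormation resolves at deploy time can be modeled in a few lines of Python (illustrative only, using the us-east-1 roles from the question):

```python
# Model of the Join -> Join -> Split trick from the answer.
roles = ["roleA", "roleB"]              # what !FindInMap returns for the region
joined = ",".join(roles)                # inner !Join    -> "roleA,roleB"
combined = ",".join([joined, "roleC"])  # outer Fn::Join -> "roleA,roleB,roleC"
resource = combined.split(",")          # Fn::Split -> ["roleA", "roleB", "roleC"]
print(resource)
```

One caveat: this only works as long as none of the joined values contain the delimiter character.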

Encountered unsupported property AutoScalingReplacingUpdate

An Encountered unsupported property AutoScalingReplacingUpdate error appears when trying to launch a stack that contains the following AWS::AutoScaling::AutoScalingGroup:
myAutoScalingGroup:
  Type: 'AWS::AutoScaling::AutoScalingGroup'
  CreationPolicy:
    AutoScalingReplacingUpdate:
      WillReplace: true
  Properties:
    HealthCheckType: ELB
    HealthCheckGracePeriod: 300
    AvailabilityZones:
      - eu-west-1a
      - eu-west-1b
      - eu-west-1c
    VPCZoneIdentifier:
      - 'Fn::ImportValue': !Sub '${vpcId1}'
      - 'Fn::ImportValue': !Sub '${vpcId2}'
      - 'Fn::ImportValue': !Sub '${vpcId3}'
    MetricsCollection:
      - Granularity: 1Minute
        Metrics:
          - GroupMinSize
          - GroupMaxSize
          - GroupInServiceInstances
          - GroupPendingInstances
          - GroupTerminatingInstances
    MinSize: !Ref AutoScalingGroupWSMinSize
    MaxSize: !Ref AutoScalingGroupWSMaxSize
    LaunchConfigurationName: !Ref myLaunchConfig
    TargetGroupARNs:
      - !Ref myTargetGroup
I have found an (undesired) workaround for this, but I really don't want to rely on it. The workaround is the following:
1. Comment out:
   CreationPolicy:
     AutoScalingReplacingUpdate:
       WillReplace: true
2. Launch the template.
3. Update the successfully launched stack by uncommenting the lines above.
This is bad and I don't want to do it, since my goal is to automate my infrastructure.
The attribute CreationPolicy does not have an AutoScalingReplacingUpdate property; its structure is:
CreationPolicy:
  AutoScalingCreationPolicy:
    MinSuccessfulInstancesPercent: Integer
  ResourceSignal:
    Count: Integer
    Timeout: String
The attribute UpdatePolicy is the one that does have the property AutoScalingReplacingUpdate:
UpdatePolicy:
  AutoScalingReplacingUpdate:
    WillReplace: Boolean
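Applied to the resource from the question, a minimal sketch would simply move the block from CreationPolicy to UpdatePolicy (resource and property names taken from the question; remaining properties unchanged):

```yaml
myAutoScalingGroup:
  Type: 'AWS::AutoScaling::AutoScalingGroup'
  UpdatePolicy:
    AutoScalingReplacingUpdate:
      WillReplace: true
  Properties:
    HealthCheckType: ELB
    HealthCheckGracePeriod: 300
    # ... rest of the properties as in the question ...
```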