I'm trying to compose a CloudFormation template to deploy a serverless system consisting of several Lambdas. In my case, the Lambda resource descriptions share a lot of properties; the only differences are the filename and the handler function.
How can I define something like a common set of parameters in my template?
This boilerplate is awful:
LambdaCreateUser:
  Type: AWS::Lambda::Function
  Properties:
    Code:
      S3Bucket:
        Ref: BucketForLambdas
      S3Key: create_user.zip
    Handler: create_user.lambda_handler
    Runtime: python3.7
    Role:
      Fn::GetAtt: [ LambdaRole, "Arn" ]
    Environment:
      Variables: { "EnvTable": !Ref EnvironmentTable, "UsersTable": !Ref UsersTable }
LambdaDeleteUser:
  Type: AWS::Lambda::Function
  Properties:
    Code:
      S3Bucket:
        Ref: BucketForLambdas
      S3Key: delete_user.zip
    Handler: delete_user.lambda_handler
    Runtime: python3.7
    Role:
      Fn::GetAtt: [ LambdaRole, "Arn" ]
    Environment:
      Variables: { "EnvTable": !Ref EnvironmentTable, "UsersTable": !Ref UsersTable }
What you're looking for is AWS SAM, which is a layer of syntactic sugar on top of CloudFormation. A basic representation of your template with AWS SAM would look like this:
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Globals:
  Function:
    Runtime: python3.7
    Environment:
      Variables:
        EnvTable: !Ref EnvironmentTable
        UsersTable: !Ref UsersTable
Resources:
  LambdaCreateUser:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri:
        Bucket: !Ref BucketForLambdas
        Key: create_user.zip
      Handler: create_user.lambda_handler
      Role: !GetAtt LambdaRole.Arn
  LambdaDeleteUser:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri:
        Bucket: !Ref BucketForLambdas
        Key: delete_user.zip
      Handler: delete_user.lambda_handler
      Role: !GetAtt LambdaRole.Arn
But that's not the end. You can replace the code definition with a path to your code, or even inline code, and use sam build and sam package to build and upload your artifacts. You can also probably replace the role definition with SAM policy templates to cut the boilerplate even further.
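For example, a minimal sketch of that shape, assuming a local src/create_user/ folder and the DynamoDBCrudPolicy policy template (both illustrative, not taken from the original template):

  LambdaCreateUser:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/create_user/           # local folder; sam build / sam package upload it for you
      Handler: create_user.lambda_handler
      Policies:
        - DynamoDBCrudPolicy:             # SAM policy template instead of a hand-written role
            TableName: !Ref UsersTable

When Policies is set and Role is omitted, SAM generates the execution role for the function, so the hand-written LambdaRole can eventually go away.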
Related
I have created a Fargate task and am trying to trigger it via an S3 object-creation event (see sample below) using CloudFormation. Since S3 cannot trigger it directly, I have created a CloudWatch Events rule. I am trying to pass the bucket and object name to my Fargate task code. Doing some research, I came across InputTransformer, but I'm not sure how to pass the values of my bucket and key and how to read them in my Python code. Any help will be appreciated.
AWSTemplateFormatVersion: 2010-09-09
Description: An example CloudFormation template for Fargate.
Parameters:
  VPC:
    Type: AWS::EC2::VPC::Id
  SubnetA:
    Type: AWS::EC2::Subnet::Id
  SubnetB:
    Type: AWS::EC2::Subnet::Id
  Image:
    Type: String
    Default: 123456789012.dkr.ecr.region.amazonaws.com/image:tag
Resources:
  mybucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: 'mytestbucket-us'
  cloudwatchEvent:
    Type: AWS::Events::Rule
    Properties:
      EventPattern:
        source:
          - aws.s3
        detail:
          eventSource:
            - s3.amazonaws.com
          eventName:
            - PutObject
            - CompleteMultipartUpload
          requestParameters:
            bucketName:
              - !Ref mybucket
      Targets:
        - Id: my-fargate-task
          Arn: myclusterArn
          RoleArn: myinvocationrolearn
          Input:
            'Fn::Sub':
              - >-
                {"containerOverrides": [{"name": "somecontainer"}]}
          EcsParameters:
            TaskDefinition:
            LaunchType: 'FARGATE'
            ...
            NetworkConfiguration:
            ...
  Cluster:
    Type: AWS::ECS::Cluster
    Properties:
      ClusterName: !Join ['', [!Ref ServiceName, Cluster]]
  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    DependsOn: LogGroup
    Properties:
      Family: !Join ['', [!Ref ServiceName, TaskDefinition]]
      NetworkMode: awsvpc
      RequiresCompatibilities:
        - FARGATE
      Cpu: 256
      Memory: 2GB
      ExecutionRoleArn: !Ref ExecutionRole
      TaskRoleArn: !Ref TaskRole
      ContainerDefinitions:
        - Name: !Ref ServiceName
          Image: !Ref Image
  # A role needed by ECS
  ExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Join ['', [!Ref ServiceName, ExecutionRole]]
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: ecs-tasks.amazonaws.com
            Action: 'sts:AssumeRole'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy'
  # A role for the containers
  TaskRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Join ['', [!Ref ServiceName, TaskRole]]
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: ecs-tasks.amazonaws.com
            Action: 'sts:AssumeRole'
You would use a CloudWatch Event Input Transformer to extract the data you need from the event, and pass that data to the ECS task as environment variable(s) in the target's ContainerOverrides. I don't use CloudFormation, but here's an example using Terraform.
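Translating that idea into the question's template, a rough sketch of the rule's Targets entry might look like the following. The somecontainer name and the S3_BUCKET/S3_KEY variable names are illustrative, and the placeholder quoting in InputTemplate is worth verifying against the input transformer documentation:

      Targets:
        - Id: my-fargate-task
          Arn: !GetAtt Cluster.Arn
          RoleArn: myinvocationrolearn
          EcsParameters:
            TaskDefinitionArn: !Ref TaskDefinition
            LaunchType: FARGATE
          InputTransformer:
            InputPathsMap:
              bucket: $.detail.requestParameters.bucketName
              key: $.detail.requestParameters.key
            InputTemplate: >-
              {"containerOverrides": [{"name": "somecontainer",
              "environment": [{"name": "S3_BUCKET", "value": <bucket>},
              {"name": "S3_KEY", "value": <key>}]}]}

Inside the task, the Python code would then read the values from its environment, for example via os.environ['S3_BUCKET'].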
You can't. CloudWatch Events does not pass event data to ECS tasks. You need to develop your own mechanism for that. For example, trigger a Lambda first, store the event in S3, Parameter Store, or DynamoDB, and then invoke your ECS job, which will read the stored data.
Problem:
I want to associate an existing API key with a newly created "Usage Plan", which is created via AWS SAM as below:
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Resources:
  ApiGatewayApi:
    Type: AWS::Serverless::Api
    Properties:
      Name: MyAPI
      OpenApiVersion: '2.0'
      EndpointConfiguration:
        Type: REGIONAL
      StageName: prod
      MethodSettings:
        - ResourcePath: "/*"
          HttpMethod: "*"
          ThrottlingBurstLimit: 1
          ThrottlingRateLimit: 1
      Domain:
        DomainName: api.mywebsite.com
        CertificateArn: 'arn:aws:acm:eu-north-1:101010101010:certificate/88888888-4444-3333-2222-1111111111'
        EndpointConfiguration: REGIONAL
        Route53:
          HostedZoneId: 'OMMITTD82737373'
        BasePath:
          - /
      Auth:
        UsagePlan:
          CreateUsagePlan: PER_API
          Description: Usage plan for this API
          Quota:
            Limit: 1000
            Period: MONTH
          Throttle:
            BurstLimit: 1
            RateLimit: 1
          Tags:
            - Key: Name
              Value: MyUsagePlan
  usagePlanKey:
    Type: 'AWS::ApiGateway::UsagePlanKey'
    Properties:
      KeyId: 2672762828
      KeyType: API_KEY
      UsagePlanId: ????????????????????
Is it possible to reference the UsagePlanId here?
I tried: !Ref UsagePlan but no success...
Any ideas on how to do it?
Thanks
As far as I know, there is no way to reference the UsagePlan created as part of your Api.
However, you can create the UsagePlan outside of ApiGatewayApi as a separate resource and associate it with your ApiGatewayApi. Then you can easily reference it in your UsagePlanKey:
usagePlan:
  Type: 'AWS::ApiGateway::UsagePlan'
  Properties:
    ApiStages:
      - ApiId: !Ref ApiGatewayApi
        Stage: prod
    ... (rest of the properties omitted)

usagePlanKey:
  Type: 'AWS::ApiGateway::UsagePlanKey'
  Properties:
    KeyId: 2672762828
    KeyType: API_KEY
    UsagePlanId: !Ref usagePlan
I have a CloudFormation template that works as expected. It installs a Python Lambda function.
https://github.com/shantanuo/easyboto/blob/master/install_lambda.txt
But how do I run the function once every day? I know the yaml code will look something like this...
NotifierLambdaScheduledRule:
  Type: AWS::Events::Rule
  Properties:
    Name: 'notifier-scheduled-rule'
    Description: 'Triggers notifier lambda once per day'
    ScheduleExpression: cron(0 7 ? * * *)
    State: ENABLED
In other words, how do I integrate a cron setting into my CloudFormation template?
An example of a CloudFormation template I use:
# Cronjobs
## Create your Lambda
CronjobsFunction:
  Type: AWS::Lambda::Function
  Properties:
    FunctionName: FUNCTION_NAME
    Handler: index.handler
    Role: !GetAtt LambdaExecutionRole.Arn
    Code:
      S3Bucket: !Sub ${S3BucketName}
      S3Key: !Sub ${LambdasFileName}
    Runtime: nodejs8.10
    MemorySize: 512
    Timeout: 300

## Create schedule
CronjobsScheduledRule:
  Type: AWS::Events::Rule
  Properties:
    Description: Scheduled Rule
    ScheduleExpression: cron(0 7 ? * * *)
    # ScheduleExpression: rate(1 day)
    State: ENABLED
    Targets:
      - Arn: !GetAtt CronjobsFunction.Arn
        Id: TargetFunctionV1

## Grant permission to Events to trigger the Lambda
PermissionForEventsToInvokeCronjobsFunction:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !Ref CronjobsFunction
    Action: lambda:InvokeFunction
    Principal: events.amazonaws.com
    SourceArn: !GetAtt CronjobsScheduledRule.Arn

## Create Logs to check if events are working
CronjobsFunctionLogsGroup:
  Type: AWS::Logs::LogGroup
  DependsOn: CronjobsFunction
  DeletionPolicy: Delete
  Properties:
    LogGroupName: !Join ['/', ['/aws/lambda', !Ref CronjobsFunction]]
    RetentionInDays: 14
You can read about Rate and Cron Expressions here.
But if you want to run the above job once a day at 07:00 AM (UTC), the expression should probably be cron(0 7 * * ? *)
Others can provide you with a working example using plain Lambda without Serverless. But if you are using the Serverless transform with AWS CloudFormation (basically SAM, the Serverless Application Model), you can schedule your Lambda pretty easily.
For example:
ServerlessTestLambda:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: src
    Handler: test-env-var.handler
    Role: !GetAtt BasicLambdaRole.Arn
    Environment:
      Variables:
        Var1: "{{resolve:ssm:/test/ssmparam:3}}"
        Var2: "Whatever You want"
    Events:
      LambdaSchedule:
        Type: Schedule
        Properties:
          Schedule: rate(1 day)
This Lambda will be triggered once a day.
More information: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#schedule
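If you want the fixed 07:00 UTC daily run from the original question instead of rate(1 day), the same Schedule event accepts a cron expression as well (a small sketch reusing the Events block above):

    Events:
      LambdaSchedule:
        Type: Schedule
        Properties:
          Schedule: cron(0 7 * * ? *)  # every day at 07:00 UTC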
If I have some parameters like
Parameters:
  Owner:
    Description: Enter Team or Individual Name Responsible for the Stack.
    Type: String
    Default: Name
  Project:
    Description: Enter Project Name.
    Type: String
    Default: Whatever
is there a way to reference them both like:
Resources:
  Resource:
    Properties:
      Name: !Ref Owner- !Sub ${Project}
Thanks, A
You can do
Name: !Sub "${Owner}-${Project}"
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html#w2ab1c21c28c59b7
You can use:
Name: !Join [ '-', [!Ref Owner, !Ref Project] ]
Which will generate something like ownerX-projectY
Description:
I am trying to define a Serverless API resource, but I am having trouble defining the location of the Swagger specification file using the ImportValue function.
Steps to reproduce the issue:
I am not able to define an AWS::Serverless::Api resource with a nested ImportValue function in Location. I have tried the following three ways; none of them works.
Note: Stack parameters are defined properly and the exported value from the other stack exists. I am not showing them here for brevity.
ApiGatewayApi:
  Type: AWS::Serverless::Api
  Properties:
    Name: !Sub ${AWS::StackName}-API
    StageName: !Ref ApiGatewayStageName
    DefinitionBody:
      'Fn::Transform':
        Name: 'AWS::Include'
        Parameters:
          Location:
            Fn::Sub:
              - s3://${BucketName}/${SwaggerSpecificationS3Key}
              - BucketName:
                  Fn::ImportValue:
                    !Sub "${EnvironmentName}-dist-bucket-${AWS::Region}"

ApiGatewayApi:
  Type: AWS::Serverless::Api
  Properties:
    Name: !Sub ${AWS::StackName}-API
    StageName: !Ref ApiGatewayStageName
    DefinitionBody:
      'Fn::Transform':
        Name: 'AWS::Include'
        Parameters:
          Location:
            Fn::Sub:
              - s3://${BucketName}/${SwaggerSpecificationS3Key}
              - BucketName:
                  !ImportValue 'dev-dist-bucket-us-east-1'

ApiGatewayApi:
  Type: AWS::Serverless::Api
  Properties:
    Name: !Sub ${AWS::StackName}-API
    StageName: !Ref ApiGatewayStageName
    DefinitionBody:
      'Fn::Transform':
        Name: 'AWS::Include'
        Parameters:
          Location:
            Fn::Sub:
              - s3://${BucketName}/${SwaggerSpecificationS3Key}
              - BucketName:
                  Fn::ImportValue: 'dev-dist-bucket-us-east-1'
CloudFormation shows the following error:
FAILED - The value of parameter Location under transform Include must
resolve to a string, number, boolean or a list of any of these.
However, if I do not use ImportValue, it works with a nested Fn::Sub:
ApiGatewayApi:
  Type: AWS::Serverless::Api
  Properties:
    Name: !Sub ${AWS::StackName}-API
    StageName: !Ref ApiGatewayStageName
    DefinitionBody:
      'Fn::Transform':
        Name: 'AWS::Include'
        Parameters:
          Location:
            Fn::Sub:
              - s3://${BucketName}/${SwaggerSpecificationS3Key}
              - BucketName:
                  Fn::Sub: dist-bucket-${EnvironmentName}-${AWS::Region}
Is it because of Fn::Transform or AWS::Include?