The following config is extracted from my serverless.yml:
service: test-svc
provider:
  name: aws
  ...
functions:
  apiHandler:
    handler: index.handler
    events:
      - http: ANY /
      - http: 'ANY {proxy+}'
  taskHandler:
    handler: task.handler
    events:
      - sqs:
          ...
  alexaHandler:
    handler: alexa.handler
    events:
      - alexaSmartHome: ...
I want to deploy the apiHandler and taskHandler functions only in region-a,
and deploy alexaHandler in region-b, region-c, and region-d.
If I execute the command sls deploy --region us-east-1, all three functions are deployed, but I don't need that; I need only two of the functions deployed.
Using sls deploy function is not an option because it only swaps the zip file.
Putting alexaHandler in a sub-directory with its own serverless.yml didn't work, because the deployment only packages the sub-directory and won't include code from the parent directory. (A lot of code is shared between the three functions.)
Any suggestions for dealing with this requirement?
After going through the whole serverless plugin list, I found that the above requirement can be achieved with serverless-plugin-select.
With this plugin you can select which functions from serverless.yml get deployed, depending on the stage or region value; in my case, the region value.
Below is the modified serverless.yml: a plugins section is added, and a regions key is added to each function.
service: test-svc
plugins:
  - serverless-plugin-select
provider:
  name: aws
  ...
functions:
  apiHandler:
    ...
    regions:
      - us-west-2
  taskHandler:
    ...
    regions:
      - us-west-2
  alexaHandler:
    ...
    regions:
      - eu-west-1
      - us-east-1
      - us-west-2
With the above config, I use the following bash script to deploy to all regions.
#!/usr/bin/env bash
serverless deploy --region eu-west-1
serverless deploy --region us-east-1
serverless deploy --region us-west-2
You can conditionally select values in serverless.yml by storing the conditional functions in a custom variable, like this:
### serverless.yml
provider:
  name: << aws or your provider >>
  runtime: << your runtime, eg nodejs8.10 >>
  region: << your aws region >>
  stage: ${opt:stage, 'dev'}

custom:
  extraCode:
    dev:
    testing: ${file(testing_only/testing_endpoints.yml)}
    prod:
...

## and then at the functions section of serverless.yml
functions:
  - ${file(functions/someFunctionsInAFile.yml)}
  - ${file(functions/someMoreFunctions.yml)}
  - ${self:custom.extraCode.${self:provider.stage}}
When you deploy, pass in the command line option --stage=myStageName. When you pass --stage=dev or --stage=prod, the last line in the functions section resolves to nothing and no extra code is deployed.
If you pass --stage=testing, the last line in the functions section is filled with the file set in your custom variable section, and your test code is deployed.
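As a purely hypothetical illustration, the included file (testing_only/testing_endpoints.yml above) would just hold additional function definitions, for example:

# testing_only/testing_endpoints.yml -- hypothetical contents
testEndpoint:
  handler: testing/test.handler
  events:
    - http:
        path: test
        method: get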
I wanted to create a CodePipeline which builds a container image from a CodeCommit source and afterwards deploys the new image in Blue/Green fashion to my ECS service (EC2 launch type).
- The source stage is CodeCommit, which already includes appspec.json as well as taskdef.json.
- The build stage builds the new container and pushes it to ECR successfully; the file imagedefinition.json is the BuildArtifact created at this step, containing the container and the recently created image with the tag corresponding to the CodeCommit commit ID.
- The deploy stage is made of the action "Amazon ECS (Blue/Green)", using the SourceArtifact and BuildArtifact as InputArtifacts, to take the appspec and taskdef from the SourceArtifact and the image description from the BuildArtifact, and finally deploy the new container in a Blue/Green manner.
The problem is with the image definition from the BuildArtifact. The pipeline fails in the Deploy phase with the error:

Invalid action configuration
Exception while trying to read the image artifact file from the artifact: BuildArtifact.
How do I properly configure the "Amazon ECS (Blue/Green)" deploy phase so that it uses the recently created image and deploys it by replacing the placeholder IMAGE_NAME inside taskdef.json?
Any hint is highly appreciated :D
Answering my own question here; hopefully it helps others facing the same situation.
The file imagedefinitions.json is inappropriate for the deploy action "Amazon ECS (Blue/Green)". For that you have to create the file imageDetail.json within the build step and provide it as an artifact to the deploy step. How? This is how the bottom of my buildspec.yaml looks:
      - printf '{"ImageURI":"%s"}' $REPOSITORY_URI:$IMAGE_TAG > imageDetail.json
artifacts:
  files:
    - 'image*.json'
    - 'appspec.yaml'
    - 'taskdef.json'
  secondary-artifacts:
    DefinitionArtifact:
      files:
        - appspec.yaml
        - taskdef.json
    ImageArtifact:
      files:
        - imageDetail.json
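For context, a minimal buildspec.yaml sketch showing where that printf command sits could look like the following; the build commands and the REPOSITORY_URI/IMAGE_TAG variables are assumptions, not the author's exact file:

version: 0.2

phases:
  pre_build:
    commands:
      # Authenticate Docker to the ECR registry; REPOSITORY_URI and IMAGE_TAG are assumed env variables
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin "${REPOSITORY_URI%%/*}"
  build:
    commands:
      # Build and push the new container image
      - docker build -t $REPOSITORY_URI:$IMAGE_TAG .
      - docker push $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      # Write the image URI in the format the ECS Blue/Green deploy action expects
      - printf '{"ImageURI":"%s"}' $REPOSITORY_URI:$IMAGE_TAG > imageDetail.json

artifacts:
  files:
    - 'image*.json'
    - 'appspec.yaml'
    - 'taskdef.json'
  secondary-artifacts:
    DefinitionArtifact:
      files:
        - appspec.yaml
        - taskdef.json
    ImageArtifact:
      files:
        - imageDetail.json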
In the Deploy phase of CodePipeline, use DefinitionArtifact and ImageArtifact as input artifacts and configure them in the corresponding sections "Amazon ECS task definition" and "AWS CodeDeploy AppSpec file".
Ensure that your appspec.yaml contains a placeholder for the task definition. Here is my appspec.yaml:
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: <TASK_DEFINITION>
        LoadBalancerInfo:
          ContainerName: "my-test-container"
          ContainerPort: 8000
Also ensure that your taskdef.json contains a placeholder for the final image, like:

...
"image": "<IMAGE1_NAME>",
...

Use that placeholder in the CodePipeline config of your Blue/Green deploy phase, in the section "Dynamically update task definition image - optional", by choosing the input artifact "ImageArtifact" and the placeholder <IMAGE1_NAME>.
The Amazon ECS Blue/Green (or CodeDeployToECS) CodePipeline action requires the TaskDefinitionTemplateArtifact parameter (see [1]).
In addition to the above, note that an imageDetail.json file is required for ECS Blue/Green deployments (not 'imagedefinition.json'). The file structure and details are described at [2]. Add this file to the root of your deployment artifact/version control. If you do not want to add this file manually, you can add an ECR source action to the CodePipeline and configure it with the image you are using in the ECS service/taskdef.json. This is all discussed at [2] for clarity.
To see how this is all brought together, you can also follow the step-by-step instructions for ECS Blue/Green deployments at [3].
References:
[1] https://docs.aws.amazon.com/codepipeline/latest/userguide/reference-pipeline-structure.html#action-requirements : CodePipeline Pipeline Structure Reference - Action Structure Requirements in CodePipeline
[2] https://docs.aws.amazon.com/codepipeline/latest/userguide/file-reference.html#file-reference-ecs-bluegreen : Image Definitions File Reference - imageDetail.json File for Amazon ECS Blue/Green Deployment Actions
[3] https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-ecs-ecr-codedeploy.html : Tutorial: Create a Pipeline with an Amazon ECR Source and ECS-to-CodeDeploy Deployment
I ran into the same problem.
tl;dr
I was not passing the correct input artefact with the imageDetail.json to the pipeline's CodeDeployToECS action.
Summary:
Instead of checking in a version of the task definition with the '<IMAGE1_NAME>' placeholder, I'm dynamically generating the task definition input to CodeDeploy inside the pipeline.
The task definition early in the project is quite volatile, with new variables etc. being passed to the container. It's generated and registered within the pipeline (CloudFormation) and then read out via a CodeBuild project, which substitutes the image placeholder with '<IMAGE1_NAME>' and passes it to the next stage in the pipeline via a pipeline artefact.
Fixing it:
I have a CodeBuild project within the pipeline that produces the imageDetail.json:
{"ImageURI":"########.dkr.ecr.eu-west-1.amazonaws.com/##/#####:2739511dd87d4e4e1f65ed69c9e779b63fb72e36-master-fbe73fdc-6213-4bd6-a784-dcc3d2ae7845"}
Its pipeline output is named 'BuildDockerOutput'.
I have another CodeBuild project that produces:
taskdef.json
{
  "containerDefinitions": [
    {
      "name": "ronantest1",
      "image": "<IMAGE1_NAME>",
      ...
    }
  ]
}
appspec.json
{
  "version": 0.0,
  "Resources": [
    {
      "TargetService": {
        "Type": "AWS::ECS::Service",
        "Properties": {
          "TaskDefinition": "<TASK_DEFINITION>",
          "LoadBalancerInfo": {
            "ContainerName": "ronantest1",
            "ContainerPort": "8080"
          }
        }
      }
    }
  ],
  "Hooks": [
    {
      "AfterAllowTestTraffic": "arn:aws:lambda:eu-west-1:######:function:code-deploy-after-allow-test-traffic"
    }
  ]
}
Its pipeline output is named 'PrepareCodeDeployOutputTesting'.
My final CodeDeploy action is like the following:
- Name: BlueGreenDeploy
  InputArtifacts:
    - Name: BuildDockerOutput
    - Name: PrepareCodeDeployOutputTesting
  Region: !Ref DeployRegion1
  ActionTypeId:
    Category: Deploy
    Owner: AWS
    Version: '1'
    Provider: CodeDeployToECS
  RoleArn: !Sub arn:aws:iam::${TestingAccountId}:role/######/CrossAccountsDeploymentRole
  Configuration:
    AppSpecTemplateArtifact: PrepareCodeDeployOutputTesting
    AppSpecTemplatePath: appspec.json
    ApplicationName: !Ref ApplicationName
    DeploymentGroupName: !Ref ApplicationName
    TaskDefinitionTemplateArtifact: PrepareCodeDeployOutputTesting
    TaskDefinitionTemplatePath: taskdef.json
    Image1ArtifactName: BuildDockerOutput
    Image1ContainerName: "IMAGE1_NAME"
  RunOrder: 4
Note that the different parts of the CodeDeployToECS configuration take their artefacts from different InputArtifacts, in particular 'Image1ArtifactName'.
Thanks to all; this sheds some light on solving the issue.
I would like to add that when you use the AWS CLI, CloudFormation, or Terraform to configure CodePipeline, some parameters and options are not available in the console, and setting some variables in these tools, like the empty string "", will cause an exception error.
Always check the CodePipeline settings in the console when you deploy using these tools.
So the error occurs when you define the image artifact but do not define the placeholder.
imageDetail.json can be passed into CodeDeploy using the following methods:
- Git source (CodeCommit or GitHub): the file already exists in your app codebase.
- ECR source: the file is autogenerated by ECR, but uses the SHA256 digest instead of the image tag (see the illustrative example after this list).
- CodeBuild source: you create the file in your CodeBuild buildspec.yml and pass it down to the CodeDeploy stage.
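For the ECR-source variant mentioned above, the autogenerated file looks roughly like the following; the account, repository, and digest are made-up values:

{
  "ImageURI": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-repo@sha256:EXAMPLEDIGEST"
}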
Reading through https://docs.aws.amazon.com/codepipeline/latest/userguide/approvals-action-add.html, it sounds like you can only create a manual approval step through the console or the CLI, but not through a CloudFormation template?
Edgar
Actually, CloudFormation does support this.
You just need to set the Provider of the action's ActionTypeId (Pipeline -> Stage -> Action -> ActionTypeId) to Manual, and that's it. More info about provider types is in the CodePipeline documentation.
Example:
DeliveryPipeline:
  Properties:
    ...
    Stages:
      ...
      - Actions:
          - ActionTypeId:
              Category: Approval
              Owner: AWS
              Provider: Manual
              Version: '1'
            Configuration:
              NotificationArn: <<arn>>
            InputArtifacts: []
            Name: TestApproval
            RunOrder: 1
        Name: Development_Approval
      ...
  Type: AWS::CodePipeline::Pipeline
I'm trying to set up a new repo and I keep getting the error

The CloudFormation template is invalid: Template error: instance of Fn::GetAtt references undefined resource uatLambdaRole

in my uat stage; however, the dev stage with the exact same format works fine.
I have a resource file for each of these environments.
dev
devLambdaRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: dev-lambda-role # The name of the role to be created in aws
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - lambda.amazonaws.com
          Action: sts:AssumeRole
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/AWSLambdaFullAccess
      # Documentation states the below policy is included automatically when you add VPC configuration but it is currently bugged.
      - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
uat
uatLambdaRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: uat-lambda-role # The name of the role to be created in aws
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - lambda.amazonaws.com
          Action: sts:AssumeRole
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/AWSLambdaFullAccess
      # Documentation states the below policy is included automatically when you add VPC configuration but it is currently bugged.
      - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
In my serverless.yml my role is defined as
role: ${self:custom.stage}LambdaRole
and the stage is set as
custom:
  stage: ${opt:stage, self:provider.stage}
Running serverless deploy --stage dev --verbose succeeds, but running serverless deploy --stage uat --verbose fails with the error. Can anyone see what I'm doing wrong? The uat resource was copied directly from the dev one with only the stage name change.
Here is a screenshot of the directory the resource files are in
I had the same issue; eventually I discovered that my SQS queue name wasn't the same in all three places. The three places where the SQS name must match are shown below:
...
functions:
  mylambda:
    handler: sqsHandler.handler
    events:
      - sqs:
          arn:
            Fn::GetAtt:
              - mySqsName # <= Make sure that these match
              - Arn

resources:
  Resources:
    mySqsName: # <= Make sure that these match
      Type: "AWS::SQS::Queue"
      Properties:
        QueueName: "mySqsName" # <= Make sure that these match
        FifoQueue: true
Ended up here with the same error message. My issue ended up being that I had the "resources" and "Resources" keys in serverless.yml backwards.
Correct:
resources: # <-- lowercase "r" first
  Resources: # <-- uppercase "R" second
    LambdaRole:
      Type: AWS::IAM::Role
      Properties:
        ...
🤦‍♂️
I missed copying a key part of my config here: the actual reference to my Resources file.
resources:
  Resources: ${file(./serverless-resources/${self:provider.stage}-resources.yml)}
The issue was that I had copied this from a guide and had accidentally used self:provider.stage rather than self:custom.stage. When I changed this, it deployed.
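For clarity, the corrected reference combining the custom stage variable with the resources include looks like this:

custom:
  stage: ${opt:stage, self:provider.stage}

resources:
  Resources: ${file(./serverless-resources/${self:custom.stage}-resources.yml)}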
Indentation Issue
In general, when YAML isn't working, I start by checking the indentation.
I hit this issue because one of my resources was indented too much, which put the resource in the wrong node/object. Resources should sit two indents in, since they live in the Resources sub-node of the resources node, as sketched below.
For more info on this, see the YAML docs.
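A minimal sketch of the expected nesting (the queue resource here is just a hypothetical example):

resources:          # top-level serverless.yml key
  Resources:        # CloudFormation Resources node, one indent in
    MyExampleQueue: # individual resources sit two indents in
      Type: AWS::SQS::Queue
      Properties:
        QueueName: my-example-queue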
I am trying to use a nested stack, and when my ChangeSet is executed, I get this error:
Requires capabilities : [CAPABILITY_AUTO_EXPAND]
I created the pipeline with CloudFormation.
This can be used to create a pipeline:
Configuration:
  ActionMode: CHANGE_SET_REPLACE
  ChangeSetName: changeset
  RoleArn: ??
  Capabilities: CAPABILITY_IAM
  StackName: appsync-graphql
  TemplatePath: BuildArtifact::output.yaml
This can’t:
Configuration:
  ActionMode: CHANGE_SET_REPLACE
  ChangeSetName: changeset
  RoleArn: ??
  Capabilities:
    - CAPABILITY_IAM
    - CAPABILITY_AUTO_EXPAND
  StackName: appsync-graphql
  TemplatePath: BuildArtifact::output.yaml
The error was: "Value of property Configuration must be an object with String (or simple type) properties".
This is the closest documentation I could find: https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_UpdateStack.html
It says "Type: Array of strings" for Capabilities, and the AWS CLI docs say something similar, but neither gives an example.
So I ran out of ideas about what else to try to get the CAPABILITY_AUTO_EXPAND capability added.
I tried another variant and it worked!
Configuration:
  ...
  Capabilities: CAPABILITY_IAM,CAPABILITY_AUTO_EXPAND
  ...
I got the answer from Keeton Hodgson; this CLI command works:
sam deploy --template-file output.yaml --stack-name <AppName> --capabilities CAPABILITY_IAM CAPABILITY_AUTO_EXPAND
Notice that there is no comma.
I still don't know how to change the pipeline template for it to work.
I tried the solutions above, and what worked for me today (June 2020) with the higher-level SAM CLI was adding a space between the listed capabilities. It's complete insanity that there's no resilience in how this text file is interpreted. The SAM CLI is open source, so I guess I could put my code where my mouth is and submit a PR. Anyway.
samconfig.toml:
...
capabilities = "CAPABILITY_IAM CAPABILITY_AUTO_EXPAND"
...
Then:
sam deploy
Output:
...
Capabilities : ["CAPABILITY_IAM", "CAPABILITY_AUTO_EXPAND"]
...
Put the capabilities property at the very end, like this:
aws cloudformation deploy COMMAND --capabilities CAPABILITY_NAMED_IAM
Change the order
Configuration:
  ActionMode: CHANGE_SET_REPLACE
  ChangeSetName: changeset
  RoleArn: ??
  StackName: appsync-graphql
  TemplatePath: BuildArtifact::output.yaml
  Capabilities:
    - CAPABILITY_IAM
    - CAPABILITY_AUTO_EXPAND
After some research I found that you can actually add those capabilities in the console.
See the "Capabilities - optional" section in the CloudFormation deploy phase definition in the console.
I have a serverless project that is creating an API Gateway API amongst other things. One of the functions in the project needs to generate a URL for an API endpoint.
My plan is to get the API ID using a resource output in serverless.yml, then build the URL and pass it through to the Lambda function as an environment variable.
My problem/question is: how do I get the API ID as a CloudFormation output in serverless.yml?
I've tried:
resources:
  Outputs:
    RESTApiId:
      Description: The id of the API created in the API gateway
      Value:
        Ref: name-of-api
but this gives the error:
The CloudFormation template is invalid: Unresolved resource dependencies [name-of-api] in the Outputs block of the template
You can write something like this in the serverless.yml file:
provider:
  region: ${opt:region, 'eu-west-1'}
  stage: ${opt:stage, 'dev'}
  environment:
    REST_API_URL:
      Fn::Join:
        - ""
        - - "https://"
          - Ref: "ApiGatewayRestApi"
          - ".execute-api."
          - ${self:provider.region}
          - "."
          - Ref: "AWS::URLSuffix"
          - "/"
          - ${self:provider.stage}
Now you can call serverless with the optional command line options --stage and/or --region to override the defaults defined above, e.g.:
serverless deploy --stage production --region us-east-1
In your code you can then use the environment variable REST_API_URL.
node.js:
const restApiUrl = process.env.REST_API_URL;
python:
import os
rest_api_url = os.environ['REST_API_URL']
Java:
String restApiUrl = System.getenv("REST_API_URL");
The Serverless Framework has a documentation page on how it generates names for resources; see AWS CloudFormation Resource Reference.
So the generated REST API resource is called ApiGatewayRestApi.
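Applying that to the Outputs block from the question, the output should therefore reference it like this:

resources:
  Outputs:
    RESTApiId:
      Description: The id of the API created in the API gateway
      Value:
        Ref: ApiGatewayRestApi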
Unfortunately, the documentation doesn't mention it:
resources:
  Outputs:
    apiGatewayHttpApiId:
      Value:
        Ref: HttpApi
      Export:
        Name: YourAppHttpApiId