Updating only one function in serverless - aws-cloudformation

I have a serverless.yml file with 3 functions.
When I run serverless deploy it updates all 3 functions, which takes a lot of time.
Is there a way I can update just 1 function without affecting the rest of the stack (i.e., without updating the CloudFormation stack)?
I tried serverless deploy -f functionA but it seems to still process and upload the other functions.
service: myrestapi
provider:
  name: aws
  region: 'us-east-1'
  runtime: nodejs14.x
functions:
  functionA:
    handler: functions/functionA.handler
    memorySize: 256
  functionB:
    handler: functions/functionB.handler
    memorySize: 128
  functionC:
    handler: functions/functionC.handler
    memorySize: 128

Why do you think it's deploying the full stack and not only the function?
The output says it's only deploying function A, isn't it?
$ serverless deploy
Running "serverless" from node_modules
Deploying myrestapi to stage dev (us-east-1)
✔ Service deployed to stack myrestapi-dev (128s)
functions:
  functionA: myrestapi-dev-functionA (30 MB)
  functionB: myrestapi-dev-functionB (30 MB)
  functionC: myrestapi-dev-functionC (30 MB)
$ serverless deploy -f functionA
Running "serverless" from node_modules
Deploying function functionA to stage dev (us-east-1)
✔ Function code deployed (17s)
Configuration did not change. Configuration update skipped. (17s)
Perhaps you're looking for packaging each function individually:
package:
  individually: true
and including only the files needed for each function:
service: myrestapi
provider:
  name: aws
  region: 'us-east-1'
  runtime: nodejs14.x
package:
  individually: true
  patterns:
    - '!node_modules/**'
    - '!functions/**'
functions:
  functionA:
    handler: functions/functionA.handler
    memorySize: 256
    package:
      patterns:
        - 'node_modules/node-fetch/**'
        - functions/functionA.js
  functionB:
    handler: functions/functionB.handler
    memorySize: 128
    package:
      patterns:
        - functions/functionB.js
  functionC:
    handler: functions/functionC.handler
    memorySize: 128
    package:
      patterns:
        - 'node_modules/node-fetch/**'
        - functions/functionC.js
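With the functions packaged individually, a single-function deploy only has to upload that function's much smaller artifact, so the two commands from the question roughly split the work like this:

  # full stack deploy - needed whenever function configuration or other resources change
  serverless deploy

  # code-only update of a single function - fast, but does not touch the CloudFormation stack
  serverless deploy -f functionA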

Related

AWS ECS Blue/Green CodePipeline: Exception while trying to read the image artifact

I wanted to create a CodePipeline which builds a container image from CodeCommit source and afterwards deploys the new image in Blue/Green fashion to my ECS service (EC2 launch type).
- The source stage is CodeCommit, which already includes appspec.json as well as taskdef.json.
- The build stage builds the new container and pushes it to ECR successfully; the file imagedefinition.json is the BuildArtifact created at this step, containing the container and the recently created image with its tag corresponding to the CodeCommit commit-id.
- The deploy stage uses the action "Amazon ECS (Blue/Green)" with the SourceArtifact and BuildArtifact as InputArtifacts, taking the appspec and taskdef from the SourceArtifact and the image description from the BuildArtifact, to finally deploy the new container in a Blue/Green manner.
The problem is with the image definition from the BuildArtifact. The pipeline fails in the Deploy phase with the error:

Invalid action configuration
Exception while trying to read the image artifact file from the artifact: BuildArtifact.

How do I properly configure the "Amazon ECS (Blue/Green)" deploy phase so that it can use the recently created image and deploy it by replacing the placeholder IMAGE_NAME inside taskdef.json?
Any hint highly appreciated :D
Answering my own question here; hopefully it helps others facing the same situation.
The file imagedefinitions.json is inappropriate for the deploy action "Amazon ECS Blue/Green". For that you have to create the file imageDetail.json within the build step and provide it as an artifact to the deploy step. How? This is how the bottom of my buildspec.yaml looks:
- printf '{"ImageURI":"%s"}' $REPOSITORY_URI:$IMAGE_TAG > imageDetail.json
artifacts:
files:
- 'image*.json'
- 'appspec.yaml'
- 'taskdef.json'
secondary-artifacts:
DefinitionArtifact:
files:
- appspec.yaml
- taskdef.json
ImageArtifact:
files:
- imageDetail.json
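The printf line above typically sits in the post_build phase. As a rough sketch of the phases portion it hangs off (the docker build/push commands and the variable names here are assumptions, not from the original answer):

version: 0.2
phases:
  build:
    commands:
      - docker build -t $REPOSITORY_URI:$IMAGE_TAG .
      - docker push $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      # generates the imageDetail.json consumed by the Blue/Green deploy action
      - printf '{"ImageURI":"%s"}' $REPOSITORY_URI:$IMAGE_TAG > imageDetail.json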
In the Deploy phase of CodePipeline, use DefinitionArtifact and ImageArtifact as Input Artifacts and configure them in the corresponding sections "Amazon ECS task definition" and "AWS CodeDeploy AppSpec file".
Ensure that your appspec.yaml contains a placeholder for the task definition. Here is my appspec.yaml:
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: <TASK_DEFINITION>
        LoadBalancerInfo:
          ContainerName: "my-test-container"
          ContainerPort: 8000
Also ensure that your taskdef.json contains a placeholder for the final image, like
  ...
  "image": "<IMAGE1_NAME>",
  ...
Use that placeholder in the CodePipeline configuration of your Blue/Green deploy phase, in the section "Dynamically update task definition image - optional", by choosing "ImageArtifact" as the input artifact and <IMAGE1_NAME> as the placeholder.
The Amazon ECS Blue/Green (or CodeDeployToECS) CodePipeline action requires the TaskDefinitionTemplateArtifact parameter (see [1]).
In addition to the above, note that an imageDetail.json file is required for ECS Blue/Green deployments (not 'imagedefinition.json'). The file structure and details are available at [2]. Add this file to the root of your deployment artifact/version control. If you do not want to add this file manually, you can add an ECR source action to the CodePipeline and configure it with the image used in your ECS service/taskdef.json. This is all discussed at [2] for clarity.
To see how this is all brought together, you can also follow the step-by-step instructions for ECS Blue/Green deployments at [3].
References:
[1] https://docs.aws.amazon.com/codepipeline/latest/userguide/reference-pipeline-structure.html#action-requirements : CodePipeline Pipeline Structure Reference - Action Structure Requirements in CodePipeline
[2] https://docs.aws.amazon.com/codepipeline/latest/userguide/file-reference.html#file-reference-ecs-bluegreen : Image Definitions File Reference - imageDetail.json File for Amazon ECS Blue/Green Deployment Actions
[3] https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-ecs-ecr-codedeploy.html : Tutorial: Create a Pipeline with an Amazon ECR Source and ECS-to-CodeDeploy Deployment
I ran into the same problem.
tl;dr
I was not passing the correct input artefact with the imageDetail.json to the pipeline CodeDeployToECS action.
Summary:
Instead of checking in a version of the task definition with the '<IMAGE1_NAME>' placeholder, I'm dynamically generating the task definition input to CodeDeploy inside the pipeline.
The task definition early in the project is quite volatile, with new variables etc. being passed to the container. It's generated and registered within the pipeline (CloudFormation) and then read out via a CodeBuild project, which substitutes the image with the '<IMAGE1_NAME>' placeholder and passes the result to the next stage in the pipeline via a pipeline artefact.
Fixing it:
I have a CodeBuild project within the pipeline that produces the imageDetail.json:
{"ImageURI":"########.dkr.ecr.eu-west-1.amazonaws.com/##/#####:2739511dd87d4e4e1f65ed69c9e779b63fb72e36-master-fbe73fdc-6213-4bd6-a784-dcc3d2ae7845"}
Its pipeline output is named 'BuildDockerOutput'.
I have another CodeBuild project that produces:
taskdef.json
{
  "containerDefinitions": [
    {
      "name": "ronantest1",
      "image": "<IMAGE1_NAME>"
    }
  ]
}
appspec.json
{
  "version": 0.0,
  "Resources": [
    {
      "TargetService": {
        "Type": "AWS::ECS::Service",
        "Properties": {
          "TaskDefinition": "<TASK_DEFINITION>",
          "LoadBalancerInfo": {
            "ContainerName": "ronantest1",
            "ContainerPort": "8080"
          }
        }
      }
    }
  ],
  "Hooks": [
    {
      "AfterAllowTestTraffic": "arn:aws:lambda:eu-west-1:######:function:code-deploy-after-allow-test-traffic"
    }
  ]
}
Its pipeline output is named 'PrepareCodeDeployOutputTesting'.
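A sketch of how that CodeBuild project could read the registered task definition and swap in the placeholder (the task definition family, the use of jq, and the intermediate file names are assumptions, not taken from the original post):

version: 0.2
phases:
  build:
    commands:
      # read the task definition registered earlier in the pipeline
      - aws ecs describe-task-definition --task-definition ronantest1 --query taskDefinition > taskdef-full.json
      # replace the image with the placeholder that CodeDeploy will later substitute
      - jq '.containerDefinitions[0].image = "<IMAGE1_NAME>"' taskdef-full.json > taskdef.json
artifacts:
  files:
    - taskdef.json
    - appspec.json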
My final CodeDeploy action looks like the following:
- Name: BlueGreenDeploy
  InputArtifacts:
    - Name: BuildDockerOutput
    - Name: PrepareCodeDeployOutputTesting
  Region: !Ref DeployRegion1
  ActionTypeId:
    Category: Deploy
    Owner: AWS
    Version: '1'
    Provider: CodeDeployToECS
  RoleArn: !Sub arn:aws:iam::${TestingAccountId}:role/######/CrossAccountsDeploymentRole
  Configuration:
    AppSpecTemplateArtifact: PrepareCodeDeployOutputTesting
    AppSpecTemplatePath: appspec.json
    ApplicationName: !Ref ApplicationName
    DeploymentGroupName: !Ref ApplicationName
    TaskDefinitionTemplateArtifact: PrepareCodeDeployOutputTesting
    TaskDefinitionTemplatePath: taskdef.json
    Image1ArtifactName: BuildDockerOutput
    Image1ContainerName: "IMAGE1_NAME"
  RunOrder: 4
Note that the different parts of the CodeDeployToECS configuration take the artefacts they need from different InputArtifacts, specifically 'Image1ArtifactName'.
Thanks to all; this sheds some light on solving the issue.
I would like to add that when you use the AWS CLI, CloudFormation, or Terraform to configure CodePipeline, some parameters and options are not available in the console, and setting some variables in these tools to the empty string "" will cause an exception.
Always check the CodePipeline settings in the console when you deploy using these tools.
So the error occurs when you define the image artifact but do not define the placeholder.
imageDetail.json can be passed into CodeDeploy using the following methods:
- Git source (CodeCommit or GitHub): the file exists in your app codebase.
- ECR source: the file is autogenerated by ECR, but uses the SHA256 digest instead of the image tag (a configuration sketch follows below).
- CodeBuild source: you update the file in your CodeBuild buildspec.yml and pass it down to the CodeDeploy stage.
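For the ECR source option, a sketch of what the corresponding CodePipeline source action could look like in CloudFormation (the repository name, tag, and artifact name are assumptions):

- Name: Source
  Actions:
    - Name: ECRSource
      ActionTypeId:
        Category: Source
        Owner: AWS
        Provider: ECR
        Version: '1'
      Configuration:
        RepositoryName: my-repo   # assumed repository name
        ImageTag: latest          # assumed tag
      OutputArtifacts:
        - Name: ImageArtifact     # contains the autogenerated imageDetail.json
      RunOrder: 1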

Can CloudFormation Create a PipeLine Manual Approval Action through Template?

Reading through https://docs.aws.amazon.com/codepipeline/latest/userguide/approvals-action-add.html
it sounds like you can only create a manual approval step through the UI console or through the CLI, but not through a CloudFormation template?
Edgar
Actually, CloudFormation does support this.
You just need to set Provider for the action's ActionTypeId (Pipeline -> Stage -> Action -> ActionTypeId) to Manual and that's it. More info about provider types is in the CodePipeline documentation.
Example:
DeliveryPipeline:
  Properties:
    ...
    Stages:
      ...
      - Actions:
          - ActionTypeId:
              Category: Approval
              Owner: AWS
              Provider: Manual
              Version: '1'
            Configuration:
              NotificationArn: <<arn>>
            InputArtifacts: []
            Name: TestApproval
            RunOrder: 1
        Name: Development_Approval
      ...
  Type: AWS::CodePipeline::Pipeline
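Once the pipeline reaches that stage, the approval can be granted in the console or via the CLI. A sketch, assuming the stage/action names from the example above and a placeholder pipeline name:

# find the approval token for the pending approval action
aws codepipeline get-pipeline-state --name <pipeline-name>

# approve (or reject) the action
aws codepipeline put-approval-result \
  --pipeline-name <pipeline-name> \
  --stage-name Development_Approval \
  --action-name TestApproval \
  --result summary="Looks good",status=Approved \
  --token <token-from-get-pipeline-state>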

Serverless conditional function deployment by region

The following config is extracted from my serverless.yml:
service: test-svc
provider:
  name: aws
  ...
functions:
  apiHandler:
    handler: index.handler
    events:
      - http: ANY /
      - http: 'ANY {proxy+}'
  taskHandler:
    handler: task.handler
    events:
      - sqs:
          ...
  alexaHandler:
    handler: alexa.handler
    events:
      - alexaSmartHome: ...
I want to deploy apiHandler and taskHandler only in region-a,
and deploy alexaHandler in region-b, region-c and region-d.
If I execute the command sls deploy --region us-east-1, all three functions will be deployed, but I don't need that. I need only 2 functions to be deployed.
Using sls deploy function is not an option because it only swaps the zip file.
Putting alexaHandler in a sub-directory with its own serverless.yml didn't work because the deployment only packages the sub-directory and won't include code from the parent directory. (A lot of code is shared between the 3 functions.)
Any suggestion to deal with this requirement?
After going through the serverless plugin list, I found the above requirement could be achieved with serverless-plugin-select.
Using this plugin we can select which functions from serverless.yml to deploy depending on the stage or region value; in my case, the region value.
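Assuming npm is used, the plugin is added as a dev dependency first:

npm install --save-dev serverless-plugin-select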
Following is the modified serverless.yml, with a plugins section added and a regions key added to each function.
service: test-svc
plugins:
  - serverless-plugin-select
provider:
  name: aws
  ...
functions:
  apiHandler:
    ...
    regions:
      - us-west-2
  taskHandler:
    ...
    regions:
      - us-west-2
  alexaHandler:
    ...
    regions:
      - eu-west-1
      - us-east-1
      - us-west-2
With the above config, I use the following bash script to deploy to all regions.
#!/usr/bin/env bash
serverless deploy --region eu-west-1
serverless deploy --region us-east-1
serverless deploy --region us-west-2
You can conditionally select values in serverless.yml by storing the conditional functions in a custom variable, like so:
### serverless.yml
provider:
  name: << aws or your provider >>
  runtime: << your runtime, eg nodejs8.10 >>
  region: << your aws region >>
  stage: ${opt:stage, 'dev'}
custom:
  extraCode:
    dev:
    testing: ${file(testing_only/testing_endpoints.yml)}
    prod:
      ...

## and then at the functions section of serverless.yml
functions:
  - ${file(functions/someFunctionsInAFile.yml)}
  - ${file(functions/someMoreFunctions.yml)}
  - ${self:custom.extraCode.${self:provider.stage}}
When you deploy, pass in the command line option --stage=myStageName. When you pass in --stage=dev or --stage=prod, the last line in the functions section resolves to nothing and no extra functions are deployed.
If you pass in --stage=testing, the last line in the functions section is filled with the file set in your custom variables section, and your test code is deployed.
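For illustration, a file like functions/someFunctionsInAFile.yml would simply contain function definitions in the usual serverless.yml format (a hypothetical example, not from the original answer):

# functions/someFunctionsInAFile.yml (hypothetical)
someFunction:
  handler: handlers/someFunction.handler
  events:
    - http:
        path: some-path
        method: get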

Serverless CloudFormation template error instance of Fn::GetAtt references undefined resource

I'm trying to set up a new repo and I keep getting the error
The CloudFormation template is invalid: Template error: instance of Fn::GetAtt references undefined resource uatLambdaRole
in my uat stage; however, the dev stage with the exact same format works fine.
I have a resource file for each of these environments.
dev
devLambdaRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: dev-lambda-role # The name of the role to be created in aws
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - lambda.amazonaws.com
          Action: sts:AssumeRole
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/AWSLambdaFullAccess
      # Documentation states the below policy is included automatically when you add VPC configuration but it is currently bugged.
      - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
uat
uatLambdaRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: uat-lambda-role # The name of the role to be created in aws
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - lambda.amazonaws.com
          Action: sts:AssumeRole
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/AWSLambdaFullAccess
      # Documentation states the below policy is included automatically when you add VPC configuration but it is currently bugged.
      - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
In my serverless.yml my role is defined as
role: ${self:custom.stage}LambdaRole
and the stage is set as
custom:
  stage: ${opt:stage, self:provider.stage}
Running serverless deploy --stage dev --verbose succeeds, but running serverless deploy --stage uat --verbose fails with the error. Can anyone see what I'm doing wrong? The uat resource was copied directly from the dev one with only the stage name changed.
Here is a screenshot of the directory the resource files are in
I had the same issue; eventually I discovered that my SQS queue name wasn't the same in all 3 places. The 3 places where the SQS name should match are shown below:
...
functions:
  mylambda:
    handler: sqsHandler.handler
    events:
      - sqs:
          arn:
            Fn::GetAtt:
              - mySqsName # <= Make sure that these match
              - Arn

resources:
  Resources:
    mySqsName: # <= Make sure that these match
      Type: "AWS::SQS::Queue"
      Properties:
        QueueName: "mySqsName" # <= Make sure that these match
        FifoQueue: true
Ended up here with the same error message. My issue ended up being that I got the "resources" and "Resources" keys in serverless.yml backwards.
Correct:
resources:      # <-- lowercase "r" first
  Resources:    # <-- uppercase "R" second
    LambdaRole:
      Type: AWS::IAM::Role
      Properties:
        ...
🤦‍♂️
I missed copying a key part of my config here, the actual reference to my Resources file
resources:
  Resources: ${file(./serverless-resources/${self:provider.stage}-resources.yml)}
The issue was that I had copied this from a guide and had accidentally used self:provider.stage rather than self:custom.stage. When I changed this, it could then deploy.
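With that change, the reference becomes:

resources:
  Resources: ${file(./serverless-resources/${self:custom.stage}-resources.yml)}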
Indentation Issue
In general, when YAML isn't working I start by checking the indentation.
I hit this issue too; in my case one of my resources was indented too much, putting the resource under the wrong node/object. The resources should be two indents in, since they live under the resources node's sub-node Resources.
For more info on this, see the YAML docs.

Seeing `permission: denied` errors in task when upgrading to version 3 of concourse

We just upgraded one of our Concourse installations to 3.3.0 and we're getting a weird error on one of our jobs.
runc create: exit status 1: container_linux.go:264: starting container process caused "process_linux.go:339: container init caused \"rootfs_linux.go:56: mounting \\\"/var/vcap/data/baggageclaim/volumes/live/17c7c6fb-a294-4274-4d3c-99d14980ab4f/volume\\\" to rootfs \\\"/var/vcap/data/garden/graph/aufs/mnt/9985cede6b6b24ac198ea4a6b252fcaa56eb1f0062cf102e9d45f293ec82ee9d\\\" at \\\"/var/vcap/data/garden/graph/aufs/mnt/9985cede6b6b24ac198ea4a6b252fcaa56eb1f0062cf102e9d45f293ec82ee9d/scratch\\\" caused \\\"mkdir /var/vcap/data/garden/graph/aufs/mnt/9985cede6b6b24ac198ea4a6b252fcaa56eb1f0062cf102e9d45f293ec82ee9d/scratch: permission denied\\\"\""
The configuration for the task is
- task: create-release
  config:
    platform: linux
    run:
      path: echo
As of Concourse version 3.0 and above, you must specify an image_resource in every task configuration.
Not having an image (now deprecated and renamed rootfs_uri) or image_resource defined in your task configuration used to be undocumented, unspecified behavior, where Garden would choose the image based on its default.
Try a task that looks like
- task: create-release
  config:
    platform: linux
    image_resource:
      type: docker-image
      source:
        repository: alpine
    run:
      path: echo