How to automate deployment to ECS Fargate when a new image is pushed to an ECR repository

Firstly, this is specific to CDK - I know there are plenty of questions/answers around this topic out there, but none of them are CDK-specific.
Given that best practices dictate that a Fargate deployment shouldn't look for the 'latest' tag in an ECR repository, how could one set up a CDK pipeline when using ECR as a source?
In a multi-repository application where each service is in its own repository (and those repositories have their own CDK CodeBuild deployments to set up building and pushing to ECR), how would the infrastructure CDK pipeline be aware of new images being pushed to an ECR repository and be able to deploy that new image to the ECS Fargate service?
Since a task definition has to specify an image tag (otherwise it'll look for 'latest', which may not exist), this seems to be impossible.
As a concrete example, say I have the following 2 repositories:
CdkInfra
One of these repositories would be created for each customer to create the full environment for their application
SomeService
Actual application code
Only one instance of this repository should exist, and it is re-used by multiple CdkInfra projects
A cdk directory defines the CodeBuild project, so when a push to master is detected the service is built and the image pushed to ECR
The expected workflow would be as such:
SomeService repository is updated and so a new image is pushed to ECR
The CdkInfra pipeline should detect that a tracked ECR repository has a new image
The CdkInfra pipeline updates the Fargate task definition to reference the new image's tag
The Fargate service pulls the new image and deploys it
I know there is currently a limitation with CodeDeploy not supporting ECS deployments because CloudFormation doesn't support them, but it seems that CodePipelineActions has the ability to set up an EcrSourceAction which may be able to achieve this; however, I've been unable to get this to work so far.
Is this possible at all, or am I stuck waiting until CloudFormation supports ECS CodeDeploy functionality?

You could store the name of the latest tag in an AWS Systems Manager (SSM) parameter (see the list here), and dynamically update it when you deploy new images to ECR.
Then, you could use the AWS SDK to fetch the value of the parameter during your CDK deploy, and then pass that value to your Fargate deployment.
The following CDK stack written in Python uses the value of the YourSSMParameterName parameter (in my AWS account) as the name of an S3 bucket:
from aws_cdk import (
    core as cdk,
    aws_s3 as s3,
)
import boto3


class MyStack(cdk.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)

        # Fetch the parameter value with the AWS SDK at synth time
        ssm = boto3.client('ssm')
        res = ssm.get_parameter(Name='YourSSMParameterName')
        name = res['Parameter']['Value']

        # Use the parameter value as the bucket name
        s3.Bucket(
            self, '...',
            bucket_name=name,
        )
I tested that and it worked beautifully.
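For the write side, here is a rough sketch of how the job that pushes the image could record the new tag; it assumes the AWS SDK v3 '@aws-sdk/client-ssm' package and that the freshly pushed tag is available in an IMAGE_TAG environment variable, and it reuses the parameter name from the stack above:
import { SSMClient, PutParameterCommand } from '@aws-sdk/client-ssm';

// Record the tag that was just pushed to ECR so the next CDK deploy picks it up
async function recordLatestTag(tag: string): Promise<void> {
  const ssm = new SSMClient({});
  await ssm.send(new PutParameterCommand({
    Name: 'YourSSMParameterName', // same parameter the stack above reads
    Value: tag,
    Type: 'String',
    Overwrite: true,
  }));
}

recordLatestTag(process.env.IMAGE_TAG ?? 'latest').catch((err) => {
  console.error(err);
  process.exit(1);
});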

Alright so after some hackery I've managed to do this.
Firstly, the service itself (in this case a Spring Boot project) gets a cdk directory in its root. This basically just sets up the CI part of the CI/CD pipeline:
const appName: string = this.node.tryGetContext('app-name');
const ecrRepo = new ecr.Repository(this, `${appName}Repository`, {
repositoryName: appName,
imageScanOnPush: true,
removalPolicy: cdk.RemovalPolicy.DESTROY,
});
const bbSource = codebuild.Source.bitBucket({
// BitBucket account
owner: 'mycompany',
// Name of the repository this project belongs to
repo: 'reponame',
// Enable webhook
webhook: true,
// Configure so webhook only fires when the master branch has an update to any code other than this CDK project (e.g. Spring source only)
webhookFilters: [codebuild.FilterGroup.inEventOf(codebuild.EventAction.PUSH).andBranchIs('master').andFilePathIsNot('./cdk/*')],
});
const buildSpec = {
version: '0.2',
phases: {
pre_build: {
// Get the git commit hash that triggered this build
commands: ['env', 'export TAG=${CODEBUILD_RESOLVED_SOURCE_VERSION}'],
},
build: {
commands: [
// Build Java project
'./mvnw clean install -DskipTests',
// Log in to ECR repository that contains the Corretto image
'aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 489478819445.dkr.ecr.us-west-2.amazonaws.com',
// Build docker images and tag them with the commit hash as well as 'latest'
'docker build -t $ECR_REPO_URI:$TAG -t $ECR_REPO_URI:latest .',
// Log in to our own ECR repository to push
'$(aws ecr get-login --no-include-email)',
// Push docker images to ECR repository defined above
'docker push $ECR_REPO_URI:$TAG',
'docker push $ECR_REPO_URI:latest',
],
},
post_build: {
commands: [
// Prepare the image definitions artifact file
'printf \'[{"name":"servicename","imageUri":"%s"}]\' $ECR_REPO_URI:$TAG > imagedefinitions.json',
'pwd; ls -al; cat imagedefinitions.json',
],
},
},
// Define the image definitions artifact - is required for deployments by other CDK projects
artifacts: {
files: ['imagedefinitions.json'],
},
};
const buildProject = new codebuild.Project(this, `${appName}BuildProject`, {
projectName: appName,
source: bbSource,
environment: {
buildImage: codebuild.LinuxBuildImage.AMAZON_LINUX_2_3,
privileged: true,
environmentVariables: {
// Required for tagging/pushing image
ECR_REPO_URI: { value: ecrRepo.repositoryUri },
},
},
buildSpec: codebuild.BuildSpec.fromObject(buildSpec),
});
!!buildProject.role &&
buildProject.role.addToPrincipalPolicy(
new iam.PolicyStatement({
effect: iam.Effect.ALLOW,
actions: ['ecr:*'],
resources: ['*'],
}),
);
Once this is set up, the CodeBuild project has to be run manually once so the ECR repo has a valid 'latest' image (otherwise the ECS service won't get created correctly).
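If you prefer not to trigger that first build through the console, a one-off StartBuild call works too. This is only a sketch, assuming the AWS SDK v3 '@aws-sdk/client-codebuild' package; 'SomeService' stands in for whatever the app-name context value (used as projectName above) actually is:
import { CodeBuildClient, StartBuildCommand } from '@aws-sdk/client-codebuild';

// Kick off the initial build once so the ECR repo gets its first 'latest' image
const codebuild = new CodeBuildClient({});
codebuild
  .send(new StartBuildCommand({ projectName: 'SomeService' })) // assumed app-name / projectName value
  .then((res) => console.log(`Started build ${res.build?.id}`))
  .catch((err) => {
    console.error(err);
    process.exit(1);
  });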
Now in the separate infrastructure codebase you can create the ECS cluster and service as normal, getting the ECR repository from a lookup:
const repo = ecr.Repository.fromRepositoryName(this, 'SomeRepository', 'reponame'); // reponame here has to match what you defined in the bbSource previously
const cluster = new ecs.Cluster(this, `Cluster`, { vpc });
const service = new ecs_patterns.ApplicationLoadBalancedFargateService(this, 'Service', {
cluster,
serviceName: 'servicename',
taskImageOptions: {
image: ecs.ContainerImage.fromEcrRepository(repo, 'latest'),
containerName: repo.repositoryName,
containerPort: 8080,
},
});
Finally, create a deployment construct which listens to ECR events, manually converts the generated imageDetail.json file into a valid imagedefinitions.json file, and then deploys to the existing service.
const sourceOutput = new cp.Artifact();
const ecrAction = new cpa.EcrSourceAction({
actionName: 'ECR-action',
output: sourceOutput,
repository: repo, // this is the same repo from where the service was originally defined
});
const buildProject = new codebuild.Project(this, 'BuildProject', {
environment: {
buildImage: codebuild.LinuxBuildImage.AMAZON_LINUX_2_3,
privileged: true,
},
buildSpec: codebuild.BuildSpec.fromObject({
version: '0.2',
phases: {
build: {
commands: [
'cat imageDetail.json | jq "[. | {name: .RepositoryName, imageUri: .ImageURI}]" > imagedefinitions.json',
'cat imagedefinitions.json',
],
},
},
artifacts: {
files: ['imagedefinitions.json'],
},
}),
});
const convertOutput = new cp.Artifact();
const convertAction = new cpa.CodeBuildAction({
actionName: 'Convert-Action',
input: sourceOutput,
outputs: [convertOutput],
project: buildProject,
});
const deployAction = new cpa.EcsDeployAction({
actionName: 'Deploy-Action',
service: service.service,
input: convertOutput,
});
new cp.Pipeline(this, 'Pipeline', {
stages: [
{ stageName: 'Source', actions: [ecrAction] },
{ stageName: 'Convert', actions: [convertAction] },
{ stageName: 'Deploy', actions: [deployAction] },
],
});
Obviously this isn't as clean as it could be once CloudFormation supports this fully, but it works pretty well.

My view on this situation is that if you are using CDK (which is really CloudFormation underneath), deploying the latest image from ECR is very difficult.
What I ended up doing is putting the Docker image build and the CDK deploy into one build script.
In my case it's a Java application: I build the war file and prepare the Dockerfile in a /docker directory
FROM tomcat:8.0
COPY deploy.war /usr/local/tomcat/webapps/
Then I have the CDK script pick up that directory and build the image at deploy time.
// CDK v1-style module paths assumed
import * as ecs from '@aws-cdk/aws-ecs';
import { DockerImageAsset } from '@aws-cdk/aws-ecr-assets';

const taskDefinition = new ecs.FargateTaskDefinition(this, 'taskDefinition', {
  cpu: 256,
  memoryLimitMiB: 1024,
});

const container = taskDefinition.addContainer('web', {
  // Build the image from the local ./docker directory and let CDK push it
  // to its own asset ECR repository as part of the deploy
  image: ecs.ContainerImage.fromDockerImageAsset(
    new DockerImageAsset(this, 'image', {
      directory: 'docker',
    })
  ),
});
This will put the image into a CDK-managed ECR repository and deploy it.
Therefore, I don't rely on ECR for keeping different versions of my build. Each time I need to deploy or roll back, I just do it directly from the build script.

Related

Why isn't Cloud Code honoring my cloudbuild.yaml file but gcloud beta builds submit is?

I am using Google's Cloud Code extension with Visual Studio Code to use GCP's Cloud Build and deploy to a local kubernetes cluster (Docker Desktop). I have directed Cloud Build to run unit tests after installing modules.
When I build using the command line gcloud beta builds submit, the Cloud Build does the module install and successfully fails to build because I intentionally wrote a failing unit test. So that's great.
However, when I try to build and deploy using the Cloud Code extension, it is not using my cloudbuild.yaml at all. I know this because:
1.) The build succeeds even with the failing unit test
2.) No logging from the unit test appears in GCP logging
3.) I completely deleted cloudbuild.yaml and the build / deploy still succeeded, which seems to imply Cloud Code is using Dockerfile
What do I need to do to ensure Cloud Code uses cloudbuild.yaml for its build/deploy to a local instance of kubernetes?
Thanks!
cloudbuild.yaml
steps:
  - name: node
    entrypoint: npm
    args: ['install']
  - id: "test"
    name: node
    entrypoint: npm
    args: ['test']
options:
  logging: CLOUD_LOGGING_ONLY
skaffold.yaml
apiVersion: skaffold/v2beta19
kind: Config
build:
  tagPolicy:
    sha256: {}
  artifacts:
    - context: .
      image: genesys-gencloud-dev
deploy:
  kubectl:
    manifests:
      - kubernetes-manifests/**
profiles:
  - name: cloudbuild
    build:
      googleCloudBuild: {}
launch.json
{
  "configurations": [
    {
      "name": "Kubernetes: Run/Debug - cloudbuild",
      "type": "cloudcode.kubernetes",
      "request": "launch",
      "skaffoldConfig": "${workspaceFolder}\\skaffold.yaml",
      "profile": "cloudbuild",
      "watch": true,
      "cleanUp": false,
      "portForward": true,
      "internalConsoleOptions": "neverOpen",
      "imageRegistry": "gcr.io/my-gcp-project",
      "debug": [
        {
          "image": "my-image-dev",
          "containerName": "my-container-dev",
          "sourceFileMap": {
            "${workspaceFolder}": "/WORK_DIR"
          }
        }
      ]
    }
  ]
}
You will need to edit your skaffold.yaml file to use Cloud Build:
build:
  googleCloudBuild: {}
See https://skaffold.dev/docs/pipeline-stages/builders/#remotely-on-google-cloud-build for more details.
EDIT: It looks like your skaffold.yaml enables Cloud Build for the cloudbuild profile, but the profile isn't active.
Some options:
Add "profile": "cloudbuild" to your launch.json for 'Run on Kubernetes'.
Move the googleCloudBuild: {} to the top-level build: section. (In other words, skip using the profile)
Activate the profile using one of the other methods from https://skaffold.dev/docs/environment/profiles/#activation
UPDATE (from asker)
I needed to do the following:
Update skaffold.yaml as follows. In particular, note the image field under build > artifacts, and the projectId field under profiles > build.
apiVersion: skaffold/v2beta19
kind: Config
build:
  tagPolicy:
    sha256: {}
  artifacts:
    - context: .
      image: gcr.io/my-project-id/my-image
deploy:
  kubectl:
    manifests:
      - kubernetes-manifests/**
profiles:
  - name: cloudbuild
    build:
      googleCloudBuild:
        projectId: my-project-id
Run this command to activate the profile: skaffold dev -p cloudbuild

How do I deploy the AWS EFS CSI Driver Helm chart from https://kubernetes-sigs.github.io/aws-efs-csi-driver/ using Pulumi

I would like to be able to deploy the AWS EFS CSI Driver Helm chart hosted at the AWS EFS SIG repo using Pulumi, with source from the AWS EFS CSI Driver GitHub repo. I would like to avoid having almost everything managed with Pulumi except this one part of my infrastructure.
Below is the TypeScript class I created to manage interacting with the k8s.helm.v3.Release class:
import * as k8s from '@pulumi/kubernetes';
import * as eks from '@pulumi/eks';

export default class AwsEfsCsiDriverHelmRepo extends k8s.helm.v3.Release {
  constructor(cluster: eks.Cluster) {
    super(`aws-efs-csi-driver`, {
      chart: `aws-efs-csi-driver`,
      version: `1.3.6`,
      repositoryOpts: {
        repo: `https://kubernetes-sigs.github.io/aws-efs-csi-driver/`,
      },
      namespace: `kube-system`,
    }, { provider: cluster.provider });
  }
}
I've tried several variations on the above code: chopping off the -driver in the name, removing aws-efs-csi-driver from the repo property, changing to latest for the version.
When I do a pulumi up I get: failed to pull chart: chart "aws-efs-csi-driver" version "1.3.6" not found in https://kubernetes-sigs.github.io/aws-efs-csi-driver/ repository
$ helm version
version.BuildInfo{Version:"v3.7.0", GitCommit:"eeac83883cb4014fe60267ec6373570374ce770b", GitTreeState:"clean", GoVersion:"go1.16.8"}
$ pulumi version
v3.24.1
You're using the wrong version in your chart invocation.
The version you're selecting is the application version, i.e. the release version of the underlying application. You need to set the chart version (see here), which is defined here.
the following works:
const csiDrive = new kubernetes.helm.v3.Release("csi", {
  chart: `aws-efs-csi-driver`,
  version: `2.2.3`,
  repositoryOpts: {
    repo: `https://kubernetes-sigs.github.io/aws-efs-csi-driver/`,
  },
  namespace: `kube-system`,
});
If you want to use the existing code you have, try this:
import * as k8s from '@pulumi/kubernetes';
import * as eks from '@pulumi/eks';

export default class AwsEfsCsiDriverHelmRepo extends k8s.helm.v3.Release {
  constructor(cluster: eks.Cluster) {
    super(`aws-efs-csi-driver`, {
      chart: `aws-efs-csi-driver`,
      version: `2.2.3`,
      repositoryOpts: {
        repo: `https://kubernetes-sigs.github.io/aws-efs-csi-driver/`,
      },
      namespace: `kube-system`,
    }, { provider: cluster.provider });
  }
}

Pulumi - How do we patch a deployment created with a Helm chart, when the values do not contain the property to be updated

I have code to deploy a Helm chart using Pulumi's Kubernetes provider.
I would like to patch the StatefulSet (change serviceAccountName) after deploying the chart. The chart doesn't come with an option to specify a service account for the StatefulSet.
here's my code
// install psmdb database chart
const psmdbChart = new k8s.helm.v3.Chart(psmdbChartName, {
  namespace: namespace.metadata.name,
  path: './percona-helm-charts/charts/psmdb-db',
  // chart: 'psmdb-db',
  // version: '1.7.0',
  // fetchOpts: {
  //   repo: 'https://percona.github.io/percona-helm-charts/'
  // },
  values: psmdbChartValues
}, {
  dependsOn: psmdbOperator
})

const set = psmdbChart.getResource('apps/v1/StatefulSet', `${psmdbChartName}-${psmdbChartValues.replsets[0].name}`);
I'm using the Percona Server for MongoDB Operator Helm charts. It uses an Operator to manage the StatefulSet, and it also defines CRDs.
I've tried Pulumi transformations. In my case the chart doesn't contain a StatefulSet resource, but a CRD instead.
If it's not possible to update ServiceAccountName on the StatefulSet using transformations, is there any other way I can override it?
any help is appreciated.
Thanks,
Pulumi has a powerful feature called Transformations, which is exactly what you need here (example). A transformation is a callback that gets invoked by the Pulumi runtime and can be used to modify resource input properties before the resource is created.
I've not tested the code but you should get the idea:
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// install psmdb database chart
const psmdbChart = new k8s.helm.v3.Chart(psmdbChartName, {
  namespace: namespace.metadata.name,
  path: './percona-helm-charts/charts/psmdb-db',
  // chart: 'psmdb-db',
  // version: '1.7.0',
  // fetchOpts: {
  //   repo: 'https://percona.github.io/percona-helm-charts/'
  // },
  values: psmdbChartValues,
  transformations: [
    // Set serviceAccountName on the StatefulSet
    (obj: any, opts: pulumi.CustomResourceOptions) => {
      if (obj.kind === "StatefulSet" && obj.metadata.name === `${psmdbChartName}-${psmdbChartValues.replsets[0].name}`) {
        obj.spec.template.spec.serviceAccountName = "customServiceAccount";
      }
    },
  ],
}, {
  dependsOn: psmdbOperator
})
It seems Pulumi doesn't have a straightforward way to patch an existing Kubernetes resource, though it is still possible with multiple steps.
From a GitHub comment:
Import existing resource
pulumi up to import
Make desired changes to imported resource
pulumi up to apply changes
It seems they plan on supporting functionality similar to kubectl apply -f for patching resources.
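As a rough sketch of the import step in TypeScript (all names here are hypothetical, and the resource inputs must mirror the live object or Pulumi will refuse the import), the existing StatefulSet could be adopted with the import resource option and then changed on a later pulumi up:
import * as k8s from '@pulumi/kubernetes';

// Adopt the operator-created StatefulSet into Pulumi state (import ID format: "namespace/name"),
// then edit its inputs (e.g. spec.template.spec.serviceAccountName) and run `pulumi up` again.
const adopted = new k8s.apps.v1.StatefulSet('psmdb-db-rs0', {
  metadata: { name: 'psmdb-db-rs0', namespace: 'psmdb' }, // hypothetical names
  // spec: { ... } must reproduce the live spec for the initial import to succeed
}, { import: 'psmdb/psmdb-db-rs0' });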

AWS ECS Blue/Green CodePipeline: Exception while trying to read the image artifact

I wanted to create a CodePipeline which builds a container image from a CodeCommit source and afterwards deploys the new image in Blue/Green fashion to my ECS service (EC2 launch type).
The source stage is CodeCommit, which already includes appspec.json as well as taskdef.json.
The build stage builds the new container and pushes it to ECR successfully; the file imagedefinition.json is the BuildArtifact created at this step, containing the container and the recently created image with its tag corresponding to the CodeCommit commit id.
The deploy stage is made of the action "Amazon ECS (Blue/Green)", using the SourceArtifact and BuildArtifact as InputArtifacts, to take the appspec and taskdef from the SourceArtifact and the image description from the BuildArtifact, and finally deploy the new container in Blue/Green fashion.
The problem is with the image definition from the BuildArtifact. The pipeline fails in the Deploy phase with the error:
Invalid action configuration
Exception while trying to read the image artifact file from the artifact: BuildArtifact.
How do I properly configure the "Amazon ECS (Blue/Green)" deploy phase so that it can use the recently created image and deploy it by replacing the IMAGE_NAME placeholder inside taskdef.json?
Any hint highly appreciated :D
Answering my own question here; hopefully it helps others who are facing the same situation.
The file imagedefinitions.json is inappropriate for the deploy action "Amazon ECS Blue/Green". For that you have to create the file imageDetail.json within the build step and provide it as an artifact to the deploy step. How? This is how the bottom of my buildspec.yaml looks:
- printf '{"ImageURI":"%s"}' $REPOSITORY_URI:$IMAGE_TAG > imageDetail.json
artifacts:
files:
- 'image*.json'
- 'appspec.yaml'
- 'taskdef.json'
secondary-artifacts:
DefinitionArtifact:
files:
- appspec.yaml
- taskdef.json
ImageArtifact:
files:
- imageDetail.json
In the Deploy phase of CodePipeline, use DefinitionArtifact and ImageArtifact as Input Artifacts and configure them in the corresponding section "Amazon ECS task definition" and "AWS CodeDeploy AppSpec file".
Ensure that your appspec.yaml contains a placeholder for the task definition. Here is my appspec.yaml:
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: <TASK_DEFINITION>
        LoadBalancerInfo:
          ContainerName: "my-test-container"
          ContainerPort: 8000
Also ensure that your taskdef.json contains a placeholder for the final image, like:
...
"image": <IMAGE1_NAME>,
...
Use that placeholder in the CodePipeline config of your Blue/Green deploy phase, in the section "Dynamically update task definition image - optional", by choosing the input artifact "ImageArtifact" and the placeholder <IMAGE1_NAME>.
The Amazon ECS Blue/Green (or CodeDeployToECS) CodePipeline action requires the TaskDefinitionTemplateArtifact parameter (see [1]).
In addition to the above, note that an imageDetail.json file is required for ECS Blue/Green deployments (not 'imagedefinitions.json'). The file structure and details are available here [2]. Add this file to the root of your deployment artifact/version control. If you do not want to add this file manually, you can add an ECR source action to the CodePipeline and configure it with the image you are using in the ECS service/taskdef.json. This is all discussed at [2] for clarity.
To see how this is all brought together, you can also follow the step-by-step instructions for ECS Blue/Green deployments here [3].
References:
[1] https://docs.aws.amazon.com/codepipeline/latest/userguide/reference-pipeline-structure.html#action-requirements : CodePipeline Pipeline Structure Reference - Action Structure Requirements in CodePipeline
[2] https://docs.aws.amazon.com/codepipeline/latest/userguide/file-reference.html#file-reference-ecs-bluegreen : Image Definitions File Reference - imageDetail.json File for Amazon ECS Blue/Green Deployment Actions
[3] https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-ecs-ecr-codedeploy.html : Tutorial: Create a Pipeline with an Amazon ECR Source and ECS-to-CodeDeploy Deployment
I ran into the same problem.
tl;dr
I was not passing the correct input artefact with the imageDetail.json to the pipeline's CodeDeployToECS action.
Summary:
Instead of checking in a version of the task definition with the '<IMAGE1_NAME>' placeholder, I'm dynamically generating the task definition input to CodeDeploy inside the pipeline.
The task definition early in the project is quite volatile, with new variables etc. being passed to the container. It's generated and registered within the pipeline (CloudFormation) and then read out via a CodeBuild project, which substitutes the image with the '<IMAGE1_NAME>' placeholder and passes it to the next stage in the pipeline via a pipeline artefact.
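As an illustration of that "read out and substitute" step, here is a sketch only; the family name and output path are hypothetical, and it assumes the AWS SDK v3 '@aws-sdk/client-ecs' package is available to the CodeBuild project:
import { writeFileSync } from 'fs';
import { ECSClient, DescribeTaskDefinitionCommand } from '@aws-sdk/client-ecs';

// Pull the task definition registered earlier in the pipeline, swap the image for the
// CodeDeploy placeholder, and write taskdef.json for the CodeDeployToECS action.
async function writeTaskdef(): Promise<void> {
  const ecs = new ECSClient({});
  const { taskDefinition } = await ecs.send(
    new DescribeTaskDefinitionCommand({ taskDefinition: 'ronantest1' }) // hypothetical family name
  );
  for (const container of taskDefinition?.containerDefinitions ?? []) {
    container.image = '<IMAGE1_NAME>'; // placeholder CodeDeploy fills in from the image artefact
  }
  // Depending on your setup you may need to strip read-only fields
  // (taskDefinitionArn, revision, status, registeredAt, ...) before CodeDeploy registers it.
  writeFileSync('taskdef.json', JSON.stringify(taskDefinition, null, 2));
}

writeTaskdef().catch((err) => {
  console.error(err);
  process.exit(1);
});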
Fixing it:
I have a CodeBuild project within the pipeline that produces the imageDetail.json:
{"ImageURI":"########.dkr.ecr.eu-west-1.amazonaws.com/##/#####:2739511dd87d4e4e1f65ed69c9e779b63fb72e36-master-fbe73fdc-6213-4bd6-a784-dcc3d2ae7845"}
Its pipeline output is named 'BuildDockerOutput'.
I have another CodeBuild project that produces:
taskdef.json
{
  "containerDefinitions": [
    {
      "name": "ronantest1",
      "image": "<IMAGE1_NAME>"
    }
  ]
}
appspec.json
{
  "version": 0.0,
  "Resources": [
    {
      "TargetService": {
        "Type": "AWS::ECS::Service",
        "Properties": {
          "TaskDefinition": "<TASK_DEFINITION>",
          "LoadBalancerInfo": {
            "ContainerName": "ronantest1",
            "ContainerPort": "8080"
          }
        }
      }
    }
  ],
  "Hooks": [
    {
      "AfterAllowTestTraffic": "arn:aws:lambda:eu-west-1:######:function:code-deploy-after-allow-test-traffic"
    }
  ]
}
Its pipeline output is named 'PrepareCodeDeployOutputTesting'.
My final CodeDeploy action looks like the following:
- Name: BlueGreenDeploy
  InputArtifacts:
    - Name: BuildDockerOutput
    - Name: PrepareCodeDeployOutputTesting
  Region: !Ref DeployRegion1
  ActionTypeId:
    Category: Deploy
    Owner: AWS
    Version: '1'
    Provider: CodeDeployToECS
  RoleArn: !Sub arn:aws:iam::${TestingAccountId}:role/######/CrossAccountsDeploymentRole
  Configuration:
    AppSpecTemplateArtifact: PrepareCodeDeployOutputTesting
    AppSpecTemplatePath: appspec.json
    ApplicationName: !Ref ApplicationName
    DeploymentGroupName: !Ref ApplicationName
    TaskDefinitionTemplateArtifact: PrepareCodeDeployOutputTesting
    TaskDefinitionTemplatePath: taskdef.json
    Image1ArtifactName: BuildDockerOutput
    Image1ContainerName: "IMAGE1_NAME"
  RunOrder: 4
Note that the different parts of the CodeDeployToECS action take their artefacts from different InputArtifacts, specifically 'Image1ArtifactName'.
Thanks to all; this gave me some insight into solving the issue.
I would like to add that when you use the AWS CLI, CloudFormation, or Terraform to configure CodePipeline, some parameters and options are not available in the console, and setting some variables in these tools to the empty string "" will cause an exception.
Always check the CodePipeline settings in the console when you deploy using these tools.
So the error occurs when you define the image artifact but don't define the placeholder.
imageDetail.json can be passed into CodeDeploy using the following methods:
Git source (CodeCommit or GitHub): the file exists in your app codebase
ECR source: the file will be autogenerated by ECR, but it uses the SHA256 digest instead of the image tag
CodeBuild source: you generate the file in your CodeBuild buildspec.yml and pass it down to the CodeDeploy stage

Serverless: Service files not changed. Skipping deployment

After some successful projects, I deleted the functions in AWS Lambda, deleted the logs in CloudWatch, and deleted the IAM roles.
I also deleted the my-service folder from my Documents.
Then I followed the steps in this tutorial in serverless.
Now when I run:
serverless deploy --aws-profile testUser_atWork
where testUser_atWork is one of my profiles to connect to AWS.
I get the following output:
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Service files not changed. Skipping deployment...
Service Information
service: my-service
stage: dev
region: us-east-1
stack: my-service-dev
api keys:
None
endpoints:
None
functions:
hello: my-service-dev-hello
serverless.yml
service: my-service

provider:
  name: aws
  runtime: nodejs6.10

functions:
  hello:
    handler: handler.hello
And this is my handler.js:
'use strict';

module.exports.hello = (event, context, callback) => {
  const response = {
    statusCode: 200,
    body: JSON.stringify({
      message: 'Go Serverless v1.0! Your function executed successfully!',
      input: event,
    }),
  };

  callback(null, response);

  // Use this code if you don't use the http event with the LAMBDA-PROXY integration
  // callback(null, { message: 'Go Serverless v1.0! Your function executed successfully!', event });
};
I don't understand why it is skipping deployment.
Have you tried:
serverless deploy --aws-profile testUser_atWork --force
to force it to update the stack?
Otherwise, try deleting the stack in CloudFormation, or removing it with the serverless remove command.