Serverless: Service files not changed. Skipping deployment

After some successful projects, I deleted the functions in AWS Lambda, deleted the logs in CloudWatch, and deleted the IAM roles.
I also deleted the my-service folder from my Documents.
Then I followed the steps in this Serverless tutorial.
Now when I run:
serverless deploy --aws-profile testUser_atWork
where testUser_atWork is one of my profiles to connect in AWS.
I get the following error:
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Service files not changed. Skipping deployment...
Service Information
service: my-service
stage: dev
region: us-east-1
stack: my-service-dev
api keys:
None
endpoints:
None
functions:
hello: my-service-dev-hello
# serverless.yml
service: my-service

provider:
  name: aws
  runtime: nodejs6.10

functions:
  hello:
    handler: handler.hello
And this is my handler.js:
'use strict';

module.exports.hello = (event, context, callback) => {
  const response = {
    statusCode: 200,
    body: JSON.stringify({
      message: 'Go Serverless v1.0! Your function executed successfully!',
      input: event,
    }),
  };

  callback(null, response);

  // Use this code if you don't use the http event with the LAMBDA-PROXY integration
  // callback(null, { message: 'Go Serverless v1.0! Your function executed successfully!', event });
};
I don't understand why it is skipping deployment.

Have you tried:
serverless deploy --aws-profile testUser_atWork --force
to force it to update the stack?
Otherwise, try deleting the stack in CloudFormation, or removing it with the serverless remove command.
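A minimal sketch of that second route, assuming the dev stage and us-east-1 region shown in the service information above:

# tear the stack down, then redeploy from scratch
serverless remove --stage dev --region us-east-1 --aws-profile testUser_atWork
serverless deploy --stage dev --region us-east-1 --aws-profile testUser_atWork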

Related

Pulumi "error deleting Autoscaling Launch Configuration" with aws.eks.NodeGroup

I am getting the following error while running pulumi up; the preview also shows a templateBody update for the aws:cloudformation:Stack spot-ng-01-nodes.
aws:ec2:LaunchConfiguration (spot-ng-01-nodeLaunchConfiguration):
error: deleting urn:pulumi:staging::xx-api::eks:index:NodeGroup$aws:ec2/launchConfiguration:LaunchConfiguration::spot-ng-01-nodeLaunchConfiguration: 1 error occurred:
* error deleting Autoscaling Launch Configuration (spot-ng-01-nodeLaunchConfiguration-3a59b7e): ResourceInUse: Cannot delete launch configuration spot-ng-01-nodeLaunchConfiguration-3a59b7e because it is attached to AutoScalingGroup spot-ng-01-d1815eb6-NodeGroup-UBM7XABBGVNU
status code: 400, request id: fc55d507-0884-4c50-aeba-33831646a914
This is the resource in question, but the code was not updated.
new eks.NodeGroup("spot-ng-01", {
cluster: cluster,
spotPrice: "0.1",
instanceType: "t3.xlarge",
taints,
labels: { spot: "true" },
version: "1.21",
maxSize: 60,
minSize: 1,
nodeSubnetIds: options.vpc.privateSubnetIds,
instanceProfile: new aws.iam.InstanceProfile("spot-ng-profile-01", { role: role.name }),
nodeAssociatePublicIpAddress: false,
nodeSecurityGroup: clusterSG,
clusterIngressRule: cluster.eksClusterIngressRule,
autoScalingGroupTags: {
Name: "spot",
"k8s.io/cluster-autoscaler/enabled": "true",
[`k8s.io/cluster-autoscaler/${clusterName}`]: "true",
},
});
Even after running pulumi refresh, I still get the error.
The solution required manual intervention; it might not be the best approach, but it solved the issue.
Pulumi had created another LaunchConfiguration, so I pointed the AutoScalingGroup in question at this new LaunchConfiguration. Then I ran pulumi up, which was now able to delete the LaunchConfiguration that was stuck, and finally I ran pulumi refresh.
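Switching the AutoScalingGroup over can be done from the AWS CLI; a rough sketch, where the group name comes from the error above and the new launch configuration name is a placeholder:

aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name spot-ng-01-d1815eb6-NodeGroup-UBM7XABBGVNU \
  --launch-configuration-name <name-of-the-new-launch-configuration>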

How to automate deployment to ECS Fargate when new image is pushed to ECR repository

Firstly, this is specific to CDK - I know there are plenty of questions/answers around this topic out there but none of them are CDK specific.
Given that best practices dictate that a Fargate deployment shouldn't look for the 'latest' tag in an ECR repository, how could one set up a CDK pipeline when using ECR as a source?
In a multi-repository application where each service lives in its own repository (each of those repositories would have its own CDK CodeBuild deployment to handle building and pushing to ECR), how would the infrastructure CDK pipeline become aware of new images being pushed to an ECR repository and be able to deploy that new image to the ECS Fargate service?
Since a task definition has to specify an image tag (else it'll look for 'latest' which may not exist), this seems to be impossible.
As a concrete example, say I have the following 2 repositories:
CdkInfra
- One of these repositories would be created for each customer to build the full environment for their application
SomeService
- Actual application code
- Only one of these repositories should exist, re-used by multiple CdkInfra projects
- A cdk directory defining the CodeBuild project, so that when a push to master is detected the service is built and the image pushed to ECR
The expected workflow would be as such:
SomeService repository is updated and so a new image is pushed to ECR
The CdkInfra pipeline should detect that a tracked ECR repository has a new image
The CdkInfra pipeline updates the Fargate task definition to reference the new image's tag
The Fargate service pulls the new image and deploys it
I know there is currently a limit with CodeDeploy not supporting ECS deployments due to CFN not supporting them, but it seems that CodePipelineActions has the ability to set up an EcrSourceAction which may be able to achieve this, however I've been unable to get this to work so far.
Is this possible at all, or am I stuck waiting until CFN support ECS CodeDeploy functionality?
You could store the name of the latest tag in an AWS Systems Manager (SSM) parameter (see the list here), and dynamically update it when you deploy new images to ECR.
Then, you could use the AWS SDK to fetch the value of the parameter during your CDK deploy, and then pass that value to your Fargate deployment.
The following CDK stack written in Python uses the value of the YourSSMParameterName parameter (in my AWS account) as the name of an S3 bucket:
from aws_cdk import (
    core as cdk,
    aws_s3 as s3,
)
import boto3


class MyStack(cdk.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)

        # Read the parameter value at synth time with boto3
        ssm = boto3.client('ssm')
        res = ssm.get_parameter(Name='YourSSMParameterName')
        name = res['Parameter']['Value']

        s3.Bucket(
            self, '...',
            bucket_name=name,
        )
I tested that and it worked beautifully.
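On the publishing side you would update that parameter whenever a new image is pushed. A hedged sketch, assuming the parameter name above and a shell variable TAG holding the newly pushed image tag:

aws ssm put-parameter --name YourSSMParameterName --value "$TAG" --type String --overwrite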
Alright, so after some hackery I've managed to do this.
Firstly, the service itself (in this case a Spring Boot project) gets a cdk directory in its root. This basically just sets up the CI part of the CI/CD pipeline:
const appName: string = this.node.tryGetContext('app-name');

const ecrRepo = new ecr.Repository(this, `${appName}Repository`, {
  repositoryName: appName,
  imageScanOnPush: true,
  removalPolicy: cdk.RemovalPolicy.DESTROY,
});

const bbSource = codebuild.Source.bitBucket({
  // BitBucket account
  owner: 'mycompany',
  // Name of the repository this project belongs to
  repo: 'reponame',
  // Enable webhook
  webhook: true,
  // Configure so the webhook only fires when the master branch has an update to any code other than this CDK project (e.g. Spring source only)
  webhookFilters: [codebuild.FilterGroup.inEventOf(codebuild.EventAction.PUSH).andBranchIs('master').andFilePathIsNot('./cdk/*')],
});
const buildSpec = {
  version: '0.2',
  phases: {
    pre_build: {
      // Get the git commit hash that triggered this build
      commands: ['env', 'export TAG=${CODEBUILD_RESOLVED_SOURCE_VERSION}'],
    },
    build: {
      commands: [
        // Build Java project
        './mvnw clean install -DskipTests',
        // Log in to the ECR repository that contains the Corretto image
        'aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 489478819445.dkr.ecr.us-west-2.amazonaws.com',
        // Build docker images and tag them with the commit hash as well as 'latest'
        'docker build -t $ECR_REPO_URI:$TAG -t $ECR_REPO_URI:latest .',
        // Log in to our own ECR repository to push
        '$(aws ecr get-login --no-include-email)',
        // Push docker images to the ECR repository defined above
        'docker push $ECR_REPO_URI:$TAG',
        'docker push $ECR_REPO_URI:latest',
      ],
    },
    post_build: {
      commands: [
        // Prepare the image definitions artifact file
        'printf \'[{"name":"servicename","imageUri":"%s"}]\' $ECR_REPO_URI:$TAG > imagedefinitions.json',
        'pwd; ls -al; cat imagedefinitions.json',
      ],
    },
  },
  // Define the image definitions artifact - required for deployments by other CDK projects
  artifacts: {
    files: ['imagedefinitions.json'],
  },
};
const buildProject = new codebuild.Project(this, `${appName}BuildProject`, {
  projectName: appName,
  source: bbSource,
  environment: {
    buildImage: codebuild.LinuxBuildImage.AMAZON_LINUX_2_3,
    privileged: true,
    environmentVariables: {
      // Required for tagging/pushing the image
      ECR_REPO_URI: { value: ecrRepo.repositoryUri },
    },
  },
  buildSpec: codebuild.BuildSpec.fromObject(buildSpec),
});

!!buildProject.role &&
  buildProject.role.addToPrincipalPolicy(
    new iam.PolicyStatement({
      effect: iam.Effect.ALLOW,
      actions: ['ecr:*'],
      resources: ['*'],
    }),
  );
Once this is set up, the CodeBuild project has to be built manually once so the ECR repo has a valid 'latest' image (otherwise the ECS service won't get created correctly).
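That one-off initial build can be started from the CodeBuild console or the CLI; a minimal sketch, assuming the project name matches the app-name context value used above:

aws codebuild start-build --project-name <app-name>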
Now in the separate infrastructure codebase you can create the ECS cluster and service as normal, getting the ECR repository from a lookup:
const repo = ecr.Repository.fromRepositoryName(this, 'SomeRepository', 'reponame'); // 'reponame' here has to match what you defined in bbSource previously

const cluster = new ecs.Cluster(this, `Cluster`, { vpc });

const service = new ecs_patterns.ApplicationLoadBalancedFargateService(this, 'Service', {
  cluster,
  serviceName: 'servicename',
  taskImageOptions: {
    image: ecs.ContainerImage.fromEcrRepository(repo, 'latest'),
    containerName: repo.repositoryName,
    containerPort: 8080,
  },
});
Finally create a deployment construct which listens to ECR events, manually converts the generated imageDetail.json file into a valid imagedefinitions.json file, then deploys to the existing service.
const sourceOutput = new cp.Artifact();

const ecrAction = new cpa.EcrSourceAction({
  actionName: 'ECR-action',
  output: sourceOutput,
  repository: repo, // this is the same repo from where the service was originally defined
});

const buildProject = new codebuild.Project(this, 'BuildProject', {
  environment: {
    buildImage: codebuild.LinuxBuildImage.AMAZON_LINUX_2_3,
    privileged: true,
  },
  buildSpec: codebuild.BuildSpec.fromObject({
    version: '0.2',
    phases: {
      build: {
        commands: [
          'cat imageDetail.json | jq "[. | {name: .RepositoryName, imageUri: .ImageURI}]" > imagedefinitions.json',
          'cat imagedefinitions.json',
        ],
      },
    },
    artifacts: {
      files: ['imagedefinitions.json'],
    },
  }),
});

const convertOutput = new cp.Artifact();

const convertAction = new cpa.CodeBuildAction({
  actionName: 'Convert-Action',
  input: sourceOutput,
  outputs: [convertOutput],
  project: buildProject,
});

const deployAction = new cpa.EcsDeployAction({
  actionName: 'Deploy-Action',
  service: service.service,
  input: convertOutput,
});

new cp.Pipeline(this, 'Pipeline', {
  stages: [
    { stageName: 'Source', actions: [ecrAction] },
    { stageName: 'Convert', actions: [convertAction] },
    { stageName: 'Deploy', actions: [deployAction] },
  ],
});
Obviously this isn't as clean as it otherwise could be once CloudFormation supports this fully, but it works pretty well.
My view on this situation is that deploying the latest image from ECR is very difficult if you are using CDK (which is really CloudFormation underneath).
What I ended up doing is putting the Docker image build and the CDK deploy together in one build script.
In my case it is a Java application: I build the war file and prepare the Dockerfile in a /docker directory.
FROM tomcat:8.0
COPY deploy.war /usr/local/tomcat/webapps/
Then I have the CDK script pick it up and build the image at deploy time.
const taskDefinition = new ecs.FargateTaskDefinition(this, 'taskDefinition', {
  cpu: 256,
  memoryLimitMiB: 1024,
});

const container = taskDefinition.addContainer('web', {
  image: ecs.ContainerImage.fromDockerImageAsset(
    new DockerImageAsset(this, "image", {
      directory: "docker",
    })
  ),
});
This will push the image into a CDK-managed ECR repository and deploy it.
Therefore, I don't rely on ECR for keeping different versions of my build. Each time I need to deploy or roll back, I just do it directly from the build script.
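A rough sketch of such a build script, assuming the war file is produced by Maven and the Dockerfile above lives in a /docker directory (the paths here are placeholders):

#!/bin/bash
set -e
# build the application and stage the war next to the Dockerfile
mvn clean package -DskipTests
cp target/*.war docker/deploy.war
# build the image and deploy it as part of the CDK stack
cdk deploy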

Deployed IBM cloud function (nodejs) using manifest yaml with dependencies execution fails

I've deployed a Node.js based IBM Cloud Function using a manifest file. I'll have a few other functions which may share some common code. Here is the folder structure:
manifest.yml
actions/
- myFunction1/
-- index.js
-- package.json
- myFunction2/
-- index.js
-- package.json
- common/
-- utils.js
Here is my manifest.yml -
packages:
  myfunctions:
    version: 1.0
    license: Apache-2.0
    actions:
      myFunction1:
        function: actions/myFunction1
        runtime: nodejs:10
        include:
          - ["actions/common/*.js", "./common/"]
      myFunction2:
        function: actions/myFunction2/index.js
        runtime: nodejs:10
I deployed the functions using the following command from the command line:
ibmcloud fn deploy --manifest manifest.yml
The deployment went through successfully and both functions were created. The second function (myFunction2) executes properly, but the first function throws an error when I try to execute it. Here is the error message -
{
  "error": "Initialization has failed due to: There was an error uncompressing the action archive."
}
I even tried including the dependencies in the manifest and in the code, but it throws the same error. I was following this article -
https://medium.com/openwhisk/whisk-deploy-zip-actions-with-include-exclude-30ba6d96ad8b
Still struggling, appreciate any help.

In VSTS, async Jest tests that connect to dockerized ArangoDB database time out

I'm trying to run Jest tests with a Visual Studio Team Services build. These tests run fine and pass locally, but time out when I run them in VSTS. For each async test that connects to the database, I get
Timeout - Async callback was not invoked within the 5000ms timeout specified by jest.setTimeout.
Here's my setup:
graphql API using Apollo Server
ArangoDB database inside a docker container
A typical test looks like this:
const database = require('../models')
...
describe('database setup', () => {
  it('sets up the database and it exists', () => {
    console.log(database.db)
    const collection = database.db.collection('agents')
    console.log(collection)
    return database.db.exists().then((result) => {
      expect(result).toBeTruthy()
    }).catch(err => { console.log(err) })
      .then(x => console.log(x))
  })
})
...
describe('help functions', () => {
  it('gets edge count for a node', async () => {
    let result = await database.getEdgeCount('nodes/1', 'inbound')
    expect(result).toBeGreaterThan(2)
  })
})
I'm running the tests in VSTS with an NPM task. The YAML for this task is basic:
steps:
- task: Npm@1
  displayName: npm test
  inputs:
    command: custom
    workingDir: api
    verbose: false
    customCommand: 'test --runInBand'
I know that the tests are connecting to the database because I can console.log the database object and get the database information.
Other things I've tried:
Promise tests that don't hit the database, such as
it('foo', async () => {
  await Promise.resolve()
  expect(1).toEqual(1)
})
These pass
Increasing the timeout to 30000. This causes a couple of the tests with database calls to return null.
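(For reference, raising the default timeout looks roughly like this; a one-line sketch, typically placed in a Jest setup file:)
// raise the default async timeout for every test in the run
jest.setTimeout(30000)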
I was able to fix this, and I think there were two issues going on:
The API was not actually connecting to the database. I was able to fix this by creating a new docker network and attaching both the database and the VSTS build agent to it, as described in this other answer.
The tests were starting before the database had completely started up. I added a sleep command in a bash script before the tests, which seemed to fix this.
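A minimal sketch of that pre-test script; the 30-second wait is an assumption, not the exact value used:

#!/bin/bash
# give the ArangoDB container time to finish starting before the tests run
sleep 30
npm test -- --runInBand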

Need to configure serverless resource output to get api gateway api id

I have a serverless project that is creating an API Gateway API amongst other things. One of the functions in the project needs to generate a URL for an API endpoint.
My plan is to get the API ID using a resource output in serverless.yml then create the URL and pass it through to the lambda function as an env parameter.
My problem/question is how to get the API ID as a CloudFormation output in serverless.yml?
I've tried:
resources:
  Outputs:
    RESTApiId:
      Description: The id of the API created in the API gateway
      Value:
        Ref: name-of-api
but this gives the error:
The CloudFormation template is invalid: Unresolved resource dependencies [name-of-api] in the Outputs block of the template
You can write something like this in the serverless.yml file:
provider:
  region: ${opt:region, 'eu-west-1'}
  stage: ${opt:stage, 'dev'}
  environment:
    REST_API_URL:
      Fn::Join:
        - ""
        - - "https://"
          - Ref: "ApiGatewayRestApi"
          - ".execute-api."
          - ${self:provider.region}
          - "."
          - Ref: "AWS::URLSuffix"
          - "/"
          - ${self:provider.stage}
Now you can call serverless with the optional command-line options --stage and/or --region to override the defaults defined above, e.g.:
serverless deploy --stage production --region us-east-1
In your code you can then use the environment variable REST_API_URL
Node.js:
const restApiUrl = process.env.REST_API_URL;
Python:
import os
rest_api_url = os.environ['REST_API_URL']
Java:
String restApiUrl = System.getenv("REST_API_URL");
The Serverless Framework has a documentation page on how it generates names for resources.
See AWS CloudFormation Resource Reference.
So the generated Rest API resource is called ApiGatewayRestApi.
Unfortunately, the documentation doesn't mention it, but for an HTTP API the id can be output like this:
resources:
  Outputs:
    apiGatewayHttpApiId:
      Value:
        Ref: HttpApi
      Export:
        Name: YourAppHttpApiId
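A sketch of how another stack could then consume that export, assuming the export name above:

provider:
  environment:
    HTTP_API_ID:
      Fn::ImportValue: YourAppHttpApiId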