Deployment (CI/CD) pipeline for a serverless application

I have created a simple Node/Express/MongoDB app with three API endpoints that perform basic CRUD operations.
If I were to deploy this to Heroku as a service and use Bitbucket Pipelines for CI/CD, that would do the job for me. On top of this, I could use Heroku pipelines to get multiple environment stages, such as dev and production.
After doing all of the above I would be done with my pipeline and happy about it.
Now, coming back to serverless: I have deployed my API endpoints to AWS as Lambda functions, and that is the only environment (let's say DEV) present at the moment.
How can I achieve a pipeline similar to the one mentioned above in a serverless architecture?
None of the solutions out there (maybe I missed some) suggest promoting the actual code that was tried and tested in the dev environment to production; instead they deploy a new set of code. Is this a limitation?

Option 1
Presuming that you are developing a Node serverless application, deploying a new set of code with the same git commit ID and package-lock.json/yarn.lock should result in the same environment. This can be achieved by executing multiple deploy commands for different stages, e.g.:
sls deploy -s dev
sls deploy -s prod
There are various factors that may cause the deployed environments to be different, but the risk of that should be very low. This is the simplest CI/CD solution you can implement.
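As a rough sketch, a single Bitbucket Pipelines configuration (as referenced in the question) could deploy the same commit to both stages; the Node image, step names, deployment environment names, and the manual prod trigger below are assumptions to adapt to your setup:
image: node:18

pipelines:
  branches:
    main:
      - step:
          name: Deploy to dev
          deployment: dev
          script:
            - npm ci                      # assumes serverless is a devDependency
            - npx serverless deploy -s dev
      - step:
          name: Promote to prod
          deployment: production
          trigger: manual                 # promote the same commit once dev looks good
          script:
            - npm ci
            - npx serverless deploy -s prod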
Option 2
If you'd like to avoid the risk from Option 1 at all costs, you can split the package and deploy phases in your pipeline. Create the packages from the codebase you have checked out before you deploy:
sls package -s dev --package build/dev
sls package -s prod --package build/prod
Archive the packages as necessary; then, to deploy:
sls deploy -s dev --package build/dev
sls deploy -s prod --package build/prod
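A hedged sketch of that split in Bitbucket Pipelines (the step names, artifact paths, and manual prod trigger are assumptions); the package step builds both artifacts once, and the deploy steps reuse them:
image: node:18

pipelines:
  branches:
    main:
      - step:
          name: Package per stage
          script:
            - npm ci
            - npx serverless package -s dev --package build/dev
            - npx serverless package -s prod --package build/prod
          artifacts:
            - build/**                    # hand the packaged output to the deploy steps
      - step:
          name: Deploy dev
          deployment: dev
          script:
            - npm ci
            - npx serverless deploy -s dev --package build/dev
      - step:
          name: Deploy prod
          deployment: production
          trigger: manual
          script:
            - npm ci
            - npx serverless deploy -s prod --package build/prod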
Option 3
This is an improved version of Option 2. I have not tried this solution, but it should theoretically be possible. The problem with Option 2 is that you have to execute the package command multiple times, which might not be desirable (YMMV). To avoid packaging more than once, first create the package:
sls package -s dev --package build
Then to deploy:
# Execute a script to modify build/cloudformation-template-update-stack.json to match dev environment
sls deploy -s dev --package build
# Execute a script to modify build/cloudformation-template-update-stack.json to match prod environment
sls deploy -s prod --package build
If, for example, you have the following resource in build/cloudformation-template-update-stack.json:
"MyBucket": {
"Type": "AWS::S3::Bucket",
"Properties": {
"BucketName": "myapp-dev-bucket"
}
},
then the script you execute before the prod sls deploy should rewrite the CF resource to:
"MyBucket": {
"Type": "AWS::S3::Bucket",
"Properties": {
"BucketName": "myapp-prod-bucket"
}
},
This option, of course, implies that you can't have any hardcoded resource names in your app; every resource name must be injected from serverless.yml into your Lambdas.
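For instance, a minimal serverless.yml sketch of that injection (the service, bucket, and variable names are assumptions) could pass the stage-dependent bucket name to the Lambdas as an environment variable, so the name lives in the CloudFormation template where the pre-deploy script can rewrite it:
# Sketch only; adapt names and runtime to your project
service: myapp
provider:
  name: aws
  runtime: nodejs18.x
  stage: ${opt:stage, 'dev'}
  environment:
    # Lambdas read process.env.BUCKET_NAME instead of hardcoding the name
    BUCKET_NAME: myapp-${self:provider.stage}-bucket
resources:
  Resources:
    MyBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: myapp-${self:provider.stage}-bucket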

Related

GitHub Actions: share steps across jobs

In my GitHub Actions workflow, I have a job where I install eksctl:
install-cluster:
  name: "staging - create cluster"
  runs-on: ubuntu-latest
  steps:
    - name: Setup eksctl
      run: |
        curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
        sudo mv /tmp/eksctl /usr/local/bin
I then have another job (within the same workflow) where I deploy some applications. How can I reuse the eksctl installation step in my second job? Currently, I add a step in my second job to install eksctl. Is the cache the only way of doing this? I.e., install the tool, keep its binary in the cache, and get it back in my second job?
A GitHub Actions job runs on its own runner, which means you can't reuse a binary in your second job that was installed in your first job. If you want to make it available without having to reinstall it, you'll likely have to use an action to upload [1] the binary as an artifact in your first job and download [2] it in your second job.
If you'd like to cache the binary so it can be restored in later jobs or workflow runs without re-downloading it, you can set up https://github.com/actions/cache
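A minimal sketch of the artifact hand-off (the job names and action versions are assumptions; note that artifacts don't preserve the executable bit, hence the chmod):
jobs:
  install-cluster:
    runs-on: ubuntu-latest
    steps:
      - name: Setup eksctl
        run: |
          curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
      - name: Upload eksctl binary
        uses: actions/upload-artifact@v4
        with:
          name: eksctl
          path: /tmp/eksctl
  deploy:
    needs: install-cluster
    runs-on: ubuntu-latest
    steps:
      - name: Download eksctl binary
        uses: actions/download-artifact@v4
        with:
          name: eksctl
          path: /tmp
      - name: Install eksctl and deploy
        run: |
          chmod +x /tmp/eksctl
          sudo mv /tmp/eksctl /usr/local/bin
          eksctl version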
References:
https://github.com/actions/upload-artifact
https://github.com/actions/download-artifact

Tests that require orchestration of multiple containers in GitLab CI/CD

I am considering porting a legacy pipeline that builds and tests Docker/OCI images into GitLab CI/CD. I already have a GitLab Runner in a Kubernetes cluster and it's registered to a GitLab instance. Testing a particular image requires running certain commands inside (for running unit tests, etc.). Presumably this could be modeled by a job my_test like so:
my_test:
  stage: test
  image: my_image_1
  script:
    - my_script.sh
However, these tests are not completely self-contained; they also require the presence of a second container (a database, e.g.). At the outset, I can imagine one, perhaps suboptimal, way of handling this (there would also have to be some logic for waiting until my_image2 has started up, and a way for kubectl to obtain sufficient credentials):
before_script: kubectl create deployment my_deployment2 ...
after_script: kubectl delete deployment my_deployment2 ...
I am fairly new to GitLab CI/CD, so I am wondering: what is best practice for modeling a test like this one, i.e. situations where tests require the orchestration of multiple containers? (Does this fit into the scope of a GitLab job, or should it better be delegated to other software that my_test could talk to?)
Your first look should be at Services.
With services you can start a container running MySQL or Postgres and run tests which will connect to it.
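A hedged sketch of the my_test job with a Postgres service container (the service image, alias, and credentials below are assumptions):
my_test:
  stage: test
  image: my_image_1
  services:
    - name: postgres:15
      alias: db
  variables:
    POSTGRES_DB: test
    POSTGRES_USER: runner
    POSTGRES_PASSWORD: runner
  script:
    - my_script.sh   # connect to Postgres at the service alias "db" (or localhost, depending on the executor)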

How to take Jenkins master node offline using CLI?

I have been manually taking the master (and only) node offline whenever I sync our development database to avoid a bunch of false test failures. I have a script that does the full DB import and would like to automate the node maintenance as well.
How can I mark the master node temporarily offline using the command-line interface JAR?
I wrote a simple Bash script to execute Jenkins tasks.
I can authenticate using that script.
$ jenkins who-am-i
Authenticated as: david
Authorities:
david
authenticated
However, I cannot get offline-node or online-node to recognize the master node. The help states that I can omit the node name for "master", but that doesn't work.
$ jenkins offline-node
Argument "NAME" is required
java -jar jenkins-cli.jar online-node NAME
Stop using a node for performing builds temporarily, until the next "online-node" command.
NAME : Slave name, or empty string for master
It seems to be looking specifically for a slave, but I need to take the master's executor offline.
$ jenkins offline-node master
No such slave "master" exists. Did you mean "null"?
It's not exactly intuitive, but the Jenkins documentation is correct. If you want to specify the master node for offline-node or online-node, use the empty string:
java -jar jenkins-cli.jar offline-node ""
That said, you should probably use gareth_bowles' answer (quiet-down, below) anyway, in case you add slaves in the future.
If you only have one build executor on the master and no standalone build nodes, use this command instead:
java -jar jenkins-cli.jar quiet-down
This will stop any new builds from executing. You can use
java -jar jenkins-cli.jar cancel-quiet-down
to put Jenkins back online; at that point it will run any builds that were queued up while it was offline.

Capistrano duplicate tasks for each role

I must be missing something with Capistrano, because I've just started writing Capfiles and I'm already looking at tons of duplicated code. Consider this:
role :dev, "dev1", "dev2"
role :prod, "prod1", "prod2"
desc "Deploy the app in dev"
task :deploy_dev, :roles => :dev do
run "sudo install-stuff"
end
desc "Deploy the app in prod"
task :deploy_prod, :roles => :prod do
run "sudo install-stuff"
end
IMO it's totally reasonable to want to run the exact same task in dev or prod, but from what I can tell, Capistrano would have me write two tasks just to specify the different nodes.
It seems like if you could refer to roles on the CLI, like
cap deploy dev
cap deploy prod
there could be a single definition of the deploy task in the Capfile, as opposed to a duplicated one for each set of servers.
Is there a way to write a task once and specify the role dynamically?
Have a look at the multistage extension. While it's fairly easy to set up the tasks you need yourself, the multistage extension will do it all for you.
If you'd rather do it yourself, see the calling tasks section of the handbook. The trick is that you can invoke several tasks in order from the command line.

Executing Presto Task for QA and Production but not in Dev

I have a task that needs to run in QA and prod, but not in dev. The task stops a clustered application. The problem is that the dev servers aren't clustered, so the task to stop the cluster fails on those servers. Is there a way to handle this?
We used to have that issue as well. When the task ran to stop the cluster, it would fail in dev:
The system cannot find the path specified
C:\Windows\Sysnative\Cluster.exe /cluster:server resource "Company Name Product" /offline
To get this to work, we moved the cluster commands into variables instead of putting them directly in the task. That way, the dev version of stopping the cluster is just a no-op (cmd /exit), while the QA version runs the real cluster stop command.
(Screenshots: the task definition, the dev server variable group with the no-op command, and the QA server variable group with the real cluster command.)