AWS CodeStar -- Build with Node.js module dependencies

I'm using AWS CodeStar. It integrates a number of AWS services so that I can go from git push to deployment.
It uses CloudFormation. I have a Lambda function that depends on the uuid npm package.
How do I include this Node dependency in the CodeStar build pipeline? CloudFormation SAM uses a zip file and uploads everything to S3:
https://github.com/awslabs/serverless-application-model/blob/master/examples/2016-10-31/inline_swagger/template.yaml#L32
I don't want to build a zip file and put it into the code repo.
My next plan is to attempt running npm install in CodeBuild:
http://docs.aws.amazon.com/codebuild/latest/userguide/sample-nodejs-hw.html#sample-nodejs-hw-files

The second plan works: running npm install in CodeBuild. I needed to add the npm install step in CodeBuild, and it works great.
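For reference, here is a minimal buildspec sketch. The install/build phases and the template and bucket names follow the usual CodeStar conventions but are assumptions, so compare against the buildspec.yml your CodeStar project generated:

version: 0.2

phases:
  install:
    commands:
      # Install the function's npm dependencies (uuid) so they are included
      # in the zip that SAM uploads to S3.
      - npm install
  build:
    commands:
      # Package the SAM template; S3_BUCKET is assumed to be provided by the
      # CodeStar build environment, and template.yml is the project's SAM template.
      - aws cloudformation package --template-file template.yml --s3-bucket $S3_BUCKET --output-template-file template-export.yml

artifacts:
  files:
    - template-export.yml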

Related

Skip plugin downloading in Terraform

I'm using a self-hosted agent in Azure Pipelines and I installed Terraform 0.13 on it. When I use the Terraform tasks in Azure DevOps, I pass '-plugin-dir=/usr/local/bin/.terraform.d/plugins' as commandOptions to skip plugin downloading. Unfortunately, Terraform still downloads the plugins into the artifact, which makes it much heavier than it should be. Also, the next stage (the deployment stage) uses only the plugins from the artifact, not the ones from our agent.
We do not have much space on our virtual machine, which is why we want to avoid unnecessary downloads.
In addition, we defined a .terraformrc in the home directory with the plugin directory, and we added the environment variable as described here:
https://www.terraform.io/docs/commands/cli-config.html#provider-installation
Thank you in advance!
You can try setting the -get-plugins=false option.
-get-plugins=false — Skips plugin installation. Terraform will use plugins installed in the user plugins directory, and any plugins already installed for the current working directory. If the installed plugins aren't sufficient for the configuration, init fails.
This is stated in the terraform init documentation.
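For example, in a plain script step on the self-hosted agent (a sketch only; it bypasses the Terraform task, and the plugin directory is the one from the question):

steps:
# Initialize without downloading providers; rely on the ones already present
# on the agent (Terraform 0.13 still supports -get-plugins=false).
- script: terraform init -get-plugins=false -plugin-dir=/usr/local/bin/.terraform.d/plugins
  displayName: terraform init without plugin download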
Eventually I did it another way: I used the Cache task from Azure Pipelines. Here's a solution from ITNext:
https://itnext.io/infrastructure-as-code-iac-with-terraform-azure-devops-f8cd022a3341
This is how I configured the Cache task. Describe the keys well and choose the right path.
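A sketch of what the task can look like; the key and path values are illustrative (Terraform 0.13 has no dependency lock file, so the key is derived from the file that pins the provider versions) and need to match your own layout:

steps:
- task: Cache@2
  displayName: Cache Terraform providers
  inputs:
    # Hash the file that pins provider versions so the cache is refreshed
    # when providers change; cache the .terraform directory that init populates.
    key: 'terraform | "$(Agent.OS)" | terraform/versions.tf'
    path: '$(System.DefaultWorkingDirectory)/terraform/.terraform'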

How do I tell Codeship to ignore the node_modules folder?

I am deploying to AWS EB from Codeship. Codeship does an npm install to run the tests. It then bundles everything and sends it to AWS, where another npm install happens.
How do I prevent Codeship from bundling my node_modules folder?
The integrated Elastic Beanstalk deployment is based on copying the files over to AWS, so if you want to "ignore" a folder, add a script-based deployment before the Elastic Beanstalk deployment and remove the folders you don't want to copy over; a sketch follows after the links below.
See https://github.com/codeship/scripts/blob/master/deployments/elastic_beanstalk.sh for a script that is very similar (though not quite identical) to the commands run for the integrated deployment.
And see https://documentation.codeship.com/basic/builds-and-configuration/deployment-pipelines/#multi-step-deployment-pipelines for a bit more information on deployment pipelines containing multiple individual steps.
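The custom-script step itself can be as small as this (the folder name is taken from the question):

# Runs as a script-based deployment step before the Elastic Beanstalk step,
# so the folder is gone before the files are copied over to AWS.
rm -rf node_modules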

Building multiple Gradle projects in Jenkins with AWS CodePipeline

I have a Gradle project that consists of a master project and two others that are included using the includeFlat directive. Each of these three projects has its own repo on GitHub. To build it, I check out all three projects into a common top folder, then cd into the master project and run gradle build. And it works great!
Now I need to deploy the resulting app to AWS EB (Elastic Beanstalk), which also works great when I produce the artifact locally and then deploy it manually. I want to automate the process, so I'm trying to set it up using CodePipeline + Jenkins as described in this document, adjusted for Gradle.
The problem is that if I specify 3 Sources in the pipeline, I end up with my projects extracted on top of each other, creating a mess in the Jenkins workspace. I need to somehow configure each project to be output to its own directory within the Jenkins workspace, and I just don't see a way to do it (at least in the UI).
Then, of course, even if I achieve what I want, I somehow need to cd into the master directory to run gradle build, and again I'm not sure how to do that.
P.S. Great suggestions from @Phil, but unfortunately it seems that CodePipeline does not currently support Git submodules or subtrees.
I would start a common build whenever changes happen on any of the 3 repos, with, say, a 5-minute delay, so you get a single build even if changes are introduced to more than one repo.
I can't see a good way to deal with deployment other than using eb deploy... the old way. Install the AWS tools on your Jenkins machine, create a deployment job triggered on a successful build, and put a bash script doing the deployment there. If you share more details about your deployment, I can help with the deployment script.
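A rough sketch of such a script, assuming the Elastic Beanstalk CLI is installed on the Jenkins machine, eb init has already been run for the project, and the directory and environment names are placeholders:

#!/usr/bin/env bash
set -euo pipefail

# Jenkins deployment job, triggered after a successful build of the master project.
cd master-project
# Deploy the built application to the Elastic Beanstalk environment.
eb deploy my-app-env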

Triggering Octopus deploy when files are available

I am currently working on an Octopus project with which I am trying to automate the process below:
Copy the installation files from the folder (TFS automatically drops the new builds to this place) to the Octopus Tentacle
Install and configure the application
Run the automated tests created using SOAP UI pro on the installed product
Send mail notifications to the user
Revert the machine / uninstall the application
I have implemented all of the above using PowerShell in Octopus Deploy. The only thing I am missing is the trigger for the project.
Requirement: trigger the Octopus project containing the above process once a new build is created in TFS or a new build is placed in the folder.
There are two actions needed to "trigger" Octopus Deploy into performing the steps defined in the project process, and they can be initiated in a number of ways:
Using the UI
1) Create a release
2) Deploy the release.
Using the API
1) Create a release and then instruct that release to be deployed to an environment (the important switch here is --deployto)
octo.exe create-release --server http://xxx --apikey SECRET --project xxx --version x.x.x --packageversion=x.x.x --deployto PRODUCTION
Note: this can also be done in two steps
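Something like the following, with the same placeholder values; the deploy-release switches are written from memory, so confirm them with octo.exe help deploy-release:

octo.exe create-release --server http://xxx --apikey SECRET --project xxx --version x.x.x --packageversion=x.x.x
octo.exe deploy-release --server http://xxx --apikey SECRET --project xxx --releaseNumber x.x.x --deployto PRODUCTION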
Using Lifecycles
1) Create a release manually or using the API
2) Allow lifecycles to control what happens in environments when a release is created
Octopus Lifecycles Documentation
Hope this helps
You need to have the TFS build server upload newly built NuGet packages to the Octopus Deploy server and create a release as a post-build step.
https://octopus.com/blog/using-octopus-and-tfs-builds

How to execute Octo.exe from VSTS?

I wish to execute Octo.exe from a PowerShell script on VSTS, like this:
Octo.exe push --package $_.FullName --replace-existing --server https://deploy.mydomain.com --apiKey API-xxxxxxxx
But I don't know the correct path for Octo.exe, or whether it is present on VSTS at all. Is it possible to install it there? Or will I have to add octo.exe to my source and call it from there?
You can't call Octo.exe if you are using the hosted build agent, and it is not possible to install it on that agent either.
If you can call Octo.exe without installing it, you can add octo.exe to source control and map it to the build agent (Repository > Mappings); then you can call it via PowerShell. The path could be something like $(build.sourcesdirectory)\Tool\octo.exe, depending on how you map it to the source directory.
If Octo.exe needs to be installed, you need to set up an on-premises build agent and install Octo on that agent.
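For the mapping approach, a minimal PowerShell sketch (the Tool folder and the package location are assumptions; inside a script the sources directory is exposed as an environment variable):

# Path to the octo.exe that was committed to source control and mapped to the agent.
$octo = Join-Path $env:BUILD_SOURCESDIRECTORY "Tool\octo.exe"

# Push every package produced by the build, mirroring the snippet in the question.
Get-ChildItem "$env:BUILD_ARTIFACTSTAGINGDIRECTORY\*.nupkg" | ForEach-Object {
    & $octo push --package $_.FullName --replace-existing --server https://deploy.mydomain.com --apiKey API-xxxxxxxx
}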
On the other hand, there is the Octopus Deploy Integration extension that you can install and use directly.
Instead of cluttering the source code repository with binaries, the cleanest approach is to use the Octopus REST API for pushing a package. An example of how to push a package is provided by Octopus itself.
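A minimal sketch of that approach, reusing the server and API key from the question; /api/packages/raw is the push endpoint of the built-in package feed, and the package path is a placeholder:

# Push a .nupkg to the Octopus built-in feed over HTTP.
$server  = "https://deploy.mydomain.com"
$apiKey  = "API-xxxxxxxx"                 # API key with permission to push packages
$package = "C:\drop\MyApp.1.0.0.nupkg"    # placeholder path to the built package

$wc = New-Object System.Net.WebClient
$wc.Headers.Add("X-Octopus-ApiKey", $apiKey)
# replace=true overwrites an existing package with the same ID and version.
$wc.UploadFile("$server/api/packages/raw?replace=true", $package) | Out-Null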