OpenShift Pro: Binary Deployment - Can It Be Done From Git?

I am in the final phase of my migration from OpenShift v2 to OpenShift Pro.
I have a DEVELOPMENT version of the webapp in Eclipse Oxygen. The TEST version is deployed to OpenShift Pro via a GitHub push of the source code (the normal way).
For the LIVE version I export a WAR to the 'deployments' folder and do a binary deployment to OpenShift Pro like this:
oc new-build --image-stream=jboss-webserver30-tomcat8-openshift --binary=true --name=live
oc start-build live --from-dir=/Users/lyndon/git/mmjlive/deployments
It works very nicely...
But I would like to be able to do this using a git push. I have created a project (mmjlive) in Eclipse that has only a configuration folder and a deployments folder (containing the WAR).
My questions are:
Is it possible to trigger a binary deployment build from a git push?
If it is, then what do I need to do to achieve this?
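
For reference, a hedged sketch of one possible direction (a sketch, not a confirmed recipe): a --binary build has no git source attached to trigger from, so the usual way to build on a git push is a source (S2I) build pointed at the repo. The JBoss S2I builder images pick up prebuilt artifacts from a deployments/ directory in the source, which matches the layout of the mmjlive project described above. The repo URL here is illustrative:

# Hedged sketch: a source build from the git repo that contains the WAR in deployments/
oc new-build --image-stream=jboss-webserver30-tomcat8-openshift \
    --name=live https://github.com/<your-user>/mmjlive.git

# Source builds get a GitHub webhook trigger by default; find its URL with:
oc describe bc/live
# Then register that URL as a webhook in the GitHub repo so a push starts a build.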

Related

Parameterize Helm values yaml file in a CI pipeline

I have two projects:
Project A - Contains the Source code for my Microservice application
Project B - Contains the Kubernetes resources for Project A using Helm
Both the Projects reside in their own separate git repositories.
Project A builds using a full-blown CI pipeline that builds a Docker image with a tag version, pushes it to Docker Hub, and then writes the version number of the Docker image into Project B via a git push from the CI server. It does so by committing a simple txt file with the Docker version it just built.
So far so good! I now have Project B, which contains this Docker version for the microservice Project A, and I now want to pass/inject this value into the values.yaml so that when I package Project B via Helm, I have the latest version.
Any ideas how I could get this implemented?
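
To make the setup concrete, the CI step described above might look roughly like this (the repo URL, tag, and file name are illustrative):

# Hedged sketch of the existing CI step: record the freshly built image tag in Project B
VERSION=1.4.2   # the tag the CI server just built and pushed (illustrative)
git clone https://github.com/<org>/project-b.git
cd project-b
echo "$VERSION" > docker-version.txt
git add docker-version.txt
git commit -m "Project A image version $VERSION"
git push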
What I usually do here is write the value into the correct field of the YAML directly. For working with YAML on the command line, I recommend the CLI tool yq.
I usually use a full Kubernetes Deployment manifest YAML, and I typically update the image field with this yq command:
yq write --inplace deployment.yaml 'spec.template.spec.containers(name==myapp).image' <my-registry>/<my-image-repo>/<image-name>:<tag-name>
and after that commit the file to the repo with the YAML manifests.
Now, you use Helm, but it is still YAML, so you should be able to solve this in a similar way. Maybe something like:
yq write --inplace values.yaml 'app.image' <my-image>
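
Note that the write commands above use yq v2 syntax. With yq v4 and later, the same edit would look roughly like this, assuming the image lives under app.image in values.yaml (the field path is an assumption), followed by the commit back to Project B:

# yq v4+ equivalent of the v2 'write' command above; field path is an assumption
yq -i '.app.image = "<my-registry>/<my-image-repo>/<image-name>:<tag-name>"' values.yaml

# commit the updated values back to Project B's repo
git add values.yaml
git commit -m "Bump image tag"
git push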

Building multiple Gradle projects in Jenkins with AWS CodePipeline

I have a Gradle project that consists of a master project and 2 others that are included using the includeFlat directive. Each of these 3 projects has its own repo on GitHub. To build, I check out all 3 projects into a common top folder, then cd into the master project and run gradle build. And it works great!
Now I need to deploy the resulting app to AWS EB (Elastic Beanstalk), which also works great when I produce the artifact locally and then deploy it manually. I want to automate the process, so I'm trying to set it up using CodePipeline + Jenkins as described in this document, adjusted for Gradle.
The problem is that if I specify 3 Sources in the pipeline, I end up with my projects extracted on top of each other, creating a mess in the Jenkins workspace. I need to somehow configure each project to be output to its own directory within the Jenkins workspace, and I just don't see a way to do it (at least in the UI).
Then, of course, even if I achieve what I want, I somehow need to cd into the master directory to run gradle build, and again I'm not sure how to do that.
P.S. Great suggestions from @Phil, but unfortunately it seems that CodePipeline does not currently support Git submodules or subtrees.
I would start a common build when changes happen on any of the 3 repos, with, say, a 5-minute delay, so that there is a single build even if changes are introduced to more than one repo.
I can't see a better way to deal with deployment than using eb deploy... the old way... Install the AWS tools on your Jenkins machine, create a deployment job triggered on a successful build, and put a bash script doing the deployment there (sketched below). Please add more details about your deployment; that way I can help with the deployment script.
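
A minimal sketch of such a build-and-deploy script, assuming the three repos sit side by side in the Jenkins workspace (repo names, URLs, and the EB environment name are illustrative):

#!/usr/bin/env bash
set -e
# Hedged sketch: clone or update the three sibling repos into a common top folder
for repo in master-project sub-project-a sub-project-b; do
  if [ -d "$repo" ]; then
    git -C "$repo" pull
  else
    git clone "https://github.com/<org>/$repo.git" "$repo"
  fi
done

# build from the master project, as described in the question
cd master-project
gradle build

# deploy with the EB CLI ("the old way" mentioned above)
eb deploy my-eb-environment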

Jenkins Plugin for AWS CodeDeploy not deploying all files in workspace

I have a freestyle project (mostly JavaScript and HTML/CSS files) in Jenkins that is built with npm commands run from Windows PowerShell. It creates a build folder and builds the project as expected, with all files. The Jenkins workspace, which pulls from a Bitbucket repo, contains a number of folders and files.
In my other projects that use the Jenkins Plugin for AWS CodeDeploy as a post-build action, all the files in the workspace were deployed. For some reason, when this project is deployed to my EC2 instance, only the first four folders in the workspace are deployed, and the two important folders I need besides the build folder, src and service, are not. I have tried adjusting the "Include Files" parameter in the plugin setup, but nothing has been successful. [Screenshot of the plugin setup omitted.]
Any help is appreciated, as this has been an issue for days now. Thank you.
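
One hedged way to check what CodeDeploy actually ships, independent of the plugin, is to bundle a revision from the workspace root with the AWS CLI (application and bucket names are illustrative, and this assumes an appspec.yml at the workspace root):

# Hedged sketch: bundle and upload everything under the current directory,
# including src/ and service/, then inspect the resulting zip in S3
cd "$WORKSPACE"
aws deploy push \
  --application-name my-app \
  --s3-location s3://my-deploy-bucket/my-app.zip \
  --source .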

How to deploy an EAR/WAR in JBoss using Puppet

For a project I am working on, we have CI setup using Jenkins.
We now want to setup Continuous Delivery (CD) using Puppet.
Here are our dev environment specs:
Windows 2008 Server
JBoss SOA-P (JBoss AS 5.1 app server) - 2 instances
Jenkins for CI
Installed Puppet Learning VM (as we are evaluating, we don't have a license to install Puppet Enterprise)
My question is: how can I automate deployment of my application(s) to the already installed JBoss servers (on Windows machines) using Puppet?
For my organization, I am using the following tools to achieve Continuous Delivery and Continuous Integration:
Foreman: a provisioning tool and ENC (External Node Classifier)
Puppet Master: running on our main server
Puppet Agents: on the rest of the servers
Jenkins: on the main server
Nexus repository: maintaining staging and release repositories on another server
A Nexus repository module installed on the Puppet Master. It has the logic to connect to the Nexus repository and fetch the latest release from the "release" repository.
Flow:
I have a Jenkins job defined whose purpose is to make sure the build doesn't break, and at release time I perform a Maven release, which in turn uploads the latest version to the Nexus release repository.
I load the Nexus repository module in Foreman and map the "Nexus" class to all my servers. This is the crucial part that allows me to perform cloud deployment with a single button click in Foreman. To do this, you need your own Puppet files that perform db migration, undeployment and deployment (see the shell sketch below). I always write a deploy Maven module for all my projects that holds only these Puppet files, which lets me deploy to all servers in one shot.
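
To make the fetch-and-deploy step concrete, here is a hedged shell sketch of what the Nexus module automates (the Nexus URL, Maven coordinates, and JBoss paths are illustrative; the endpoint shown is the Nexus 2 artifact-redirect service, and on the Windows servers this would be a PowerShell equivalent). JBoss AS 5.1 hot-deploys anything dropped into the server's deploy/ directory:

# Hedged sketch: fetch the latest release WAR from Nexus and hot-deploy it to JBoss AS 5.1
curl -fL -o /tmp/myapp.war \
  "http://nexus.example.com/service/local/artifact/maven/redirect?r=releases&g=com.example&a=myapp&v=LATEST&e=war"
cp /tmp/myapp.war /opt/jboss-soa-p/server/default/deploy/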
Since you are already familiar with CI and CD, I hope my statements are self-explanatory.
Without knowing how you want your application deployed, it's hard to answer exactly how to do it. But I'll see if I can point you in the right direction:
There are existing modules on the Puppet Forge to deploy JBoss with Puppet. I'd recommend looking through them to see if you can find one that suits your requirements.
You could then integrate your source control to automatically deploy changes to your JBoss instance as they get checked in.
There's an example of using Puppet with Tomcat and Maven for continuous delivery here. It's a few years old, but the concepts still apply: http://www.slideshare.net/carlossg/2013-02-continuous-delivery-apache-con
Here's also an example from CloudBees, which employs many of the engineers behind Jenkins, for continuous delivery with Puppet and Jenkins: https://www.cloudbees.com/event/continuous-delivery-jenkins-and-puppet-debug-bad-bits-production
Plus a general PuppetLabs one here: https://puppetlabs.com/blog/whats-continuous-delivery-get-speed-these-great-puppetconf-decks
Also: I don't want to sound too salesy, and full disclosure, I work at PuppetLabs. But you can use Puppet Enterprise with up to 10 nodes for free, so I'd recommend that over the Learning VM, which isn't really designed for hosting applications.

Bamboo Deployment project to Artifactory

I've been researching this for a while but can't find an answer.
I use Bamboo 5.3 with Artifactory plugin 1.6.2. I have a build project that generates a .war and two .zips. I also have a Bamboo Deployment project that creates releases with these three files and deploys to DEV, QA and so on.
For the build project I am able to use the Artifactory plugin; that's fine. The problem is that I end up with a lot of artifacts if I publish all the builds. I would like to publish to Artifactory only the files from the releases, so that it happens less often and people see only the 3-4 release tries, not the 150 builds.
My issue is that when creating my Deployment tasks (like download, copy, call SSH script...) there is no 'Artifactory Generic Deploy' task, unlike in the build project tasks.
I see there is a new Bamboo 5.4 with some improvements around the deployment process; maybe this could help?
Support for deployment task from Bamboo to Artifactory will be available starting with version 1.8.0 of the Artifactory plugin.
Here is the Jira issue.
I faced a similar issue. Hopefully the next release of the Artifactory plugin will integrate with deployment projects.
If you are willing to use Maven to broker the deployment, deploy-file can get the job done.
In the deployment project, after your artifact download task add a Maven 3.x task for each artifact you want to send.
You'll need to specify a build JDK; for environment variables I'm using MAVEN_OPTS="-DskipTests=true -XX:MaxPermSize=4096m"
For the actual maven command:
deploy:deploy-file
-Durl=http://${bamboo.artifactory_username}:${bamboo.artifactory_password}@${bamboo.artifactory_url}/artifactory/${bamboo.destinationRepo}
-DrepositoryId=localhost
-Dfile=${bamboo.pathToArtifact}/${bamboo.artifactName}-${bamboo.majorVersion}.${bamboo.minorVersion}.${bamboo.artifactExtension}
-DgroupId=${bamboo.artifactGroup}
-DartifactId=${bamboo.artifactName}
-Dversion=${bamboo.majorVersion}.${bamboo.minorVersion}
-Dpackaging=${bamboo.artifactExtension}
-DgeneratePom=true
Hope this helps!
The Artifactory API is quite usable for this purpose. You can deploy directly using curl in a shell script.
See https://www.jfrog.com/confluence/display/RTF/Artifactory+REST+API for the details.
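
For example, a minimal hedged sketch of deploying one artifact with an HTTP PUT (host, repository, coordinates, and credentials are all illustrative):

# Hedged sketch: Artifactory accepts a simple PUT to the target repository path
curl -u deployer:password \
     -T myapp-1.2.3.war \
     "https://artifactory.example.com/artifactory/libs-release-local/com/example/myapp/1.2.3/myapp-1.2.3.war"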