I'm trying to split up a few Jenkins jobs using the Build Flow plugin so that instead of three monolithic jobs, we have three "starting points" that then use the DSL to trigger downstream jobs. I chose Build Flow over the Build Pipeline plugin because with Build Pipeline it seemed a lot harder to share jobs between different pipelines ( ie, sharing the workspace of the multiple starting jobs with a single compile job ).
Previously, I had three jobs set up: Project-PR, Project-DEV, and Project-PROD.
Project-PR would build whenever a pull request happened in GitHub, and would just run a smaller subset of our unit tests, so that we could get quick verification that the PR is okay to merge.
Project-DEV would build whenever a feature branch was merged in GitHub into the main development branch, as well as having the ability to be manually triggered and given a different branch to pull. It would run the full suite of unit tests -- basically a sanity check that everything is still good. Then it would compile and minify, push to a QA environment for testing, and then run the full suite of integration tests against that QA environment. This step was configured as a parametrized build, with the parameter being the name of the branch to pull, test, and push. It would push to and set up a QA environment specific to that branch, so that we could QA multiple features without having to merge to development ( ie, feature-one.qa.example.com, feature-two.qa.example.com ).
Project-PROD would only ever be manually triggered, and would do the full unit and integration test suite, compile and minify the front-end code ( Less, JS, and CSS ), and push the built code into a special "release branch" in GitHub that can then be deployed -- we haven't quite reached the point of Jenkins being in charge of deployment.
Now, what I wanted to set up was to split the subtasks into their own jobs, so that it'd be easy to set up new jobs without having to copy and paste all the build steps ( or copying the job and changing all the things that need to be unique ). This would let us do things like create a copy of Project-DEV, but switch out the last job for one that deploys to a staging environment set up in the cloud. Or easily create a job that could report test results to a third party source, ie copy the results to a shared network folder or something. Or any number of things. The goal is basically to use these subtask jobs as building blocks to let us build more complicated jobs, while also making it easier to update how one portion of the build works ( for example, maybe we switch to a different technology for compiling, which might change how Jenkins would compile the code ).
For example, the Project-PR would be split into the following:
Project-PULL (Build Flow) -> Project-SetupBuildEnv (Normal Job) -> Project-PartialUnitTests (Normal Job)
The SetupBuildEnv job would just pull down any NPM or Composer requirements, and set up the directories required for testing and building. The PartialUnitTests job then runs, and reports its results back up to the Build Flow job.
The Project-DEV could be split up like so:
Project-DEV -> Project-SetupBuildEnv -> Project-FullUnitTests -> Project-Compile -> Project-Minify -> Project-DeployQA -> Project-FullIntegrationTests
This way, the parts of the build process that are shared ( in this case, Project-SetupBuildEnv ) can be easily shared between jobs, reducing duplication, and making it easier to update a step in the build process without having to remember EVERY job that uses that step.
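For illustration, here is a rough sketch of what the Project-DEV flow could look like in the Build Flow DSL, assuming the job names above; the BRANCH parameter and its default are assumptions, not the actual configuration:

    // Build Flow DSL (Groovy) sketch for the Project-DEV starting point.
    // Each build() call triggers the named job and waits for it to finish,
    // which gives the sequential chain described above.
    def branch = params["BRANCH"] ?: "development"

    build("Project-SetupBuildEnv", BRANCH: branch)
    build("Project-FullUnitTests", BRANCH: branch)
    build("Project-Compile", BRANCH: branch)
    build("Project-Minify", BRANCH: branch)
    build("Project-DeployQA", BRANCH: branch)
    build("Project-FullIntegrationTests", BRANCH: branch)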
Right now, I'm using the Shared Workspace plugin so that all the steps use the same workspace. However, I'm running into an issue with that: it's not actually using one workspace. What's happening is that the Build Flow job will get a directory ( eg: /sharedspace/shared_one ), and download the code from GitHub into there. Then it will trigger the DSL, which starts up the 'SetupBuildEnv' job. But instead of working inside the same directory, it will get a directory with a name like "/sharedspace/shared_one#2", and run the build setup task in there. Then when it goes to do the third step ( unit testing ), it fails, because now it's got a third directory ( /sharedspace/shared_one#3 ), but that directory didn't have the setup run, so the required node and composer modules are missing. What's weird is that it looks like the Shared Workspace plugin is copying the first shared workspace to another directory and incrementing a counter ( the #N part of the directory name ) and giving that to the other jobs to work in.
So, question time:
is there a way to fix the Shared Workspace plugin so that it actually uses only the one directory across all the jobs in the flow?
if not, is it possible to have the Clone Workspace plugin take an argument, so I can specify what archived workspace to use instead of using the dropdown?
another possibility: would using the Shared Workspace plugin, but setting the "Local subdirectory for repo (optional)" option in the advanced Git job options to specify the directory to use, work?
failing all that, is there some other way to set up a build pipeline that can share jobs with other pipelines that I've missed out on?
In my experience, even if you do get this working, it might not be a scalable way to go longer term. We've found the Shared Workspace plugin to be a thoroughly bad idea for long / complex builds (similar reasons to yours, but also: scaling across dozens of slaves suddenly becomes hard). Arguably the idea is slightly against the spirit of modern scalable CI.
I'd instead delegate more to your build tools, be they Maven / Gradle, Ant, even Grunt, whatever. If you want to keep these builds truly modular, but can't afford to rebuild at each step (we decided full independence was worth wasting a few minutes per build) then perhaps look at creating useful artefacts at key stages - in your case, minified assets TARs, library JARs, or maybe webjars, or whatever, and deploy them to a (Maven?) repo.
Later build steps in your pipeline can quickly, easily, and repeatably pull the latest (or named version) assets from this centralised repo, and continue with the build process.
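As a sketch of that idea, assuming Gradle with the maven-publish plugin (all names, paths, and the repository URL below are illustrative), a build could package the minified assets and publish them so later pipeline steps can pull a specific version instead of sharing a workspace:

    // build.gradle (Groovy DSL) sketch: package the minified front-end assets
    // as a TAR and publish it to an internal Maven repository.
    plugins {
        id 'base'
        id 'maven-publish'
    }

    version = '1.0.' + (System.getenv('BUILD_NUMBER') ?: '0')

    task packageAssets(type: Tar) {
        archiveBaseName = 'project-assets'
        compression = Compression.GZIP
        from 'dist'                      // wherever the minified JS/CSS ends up
    }

    publishing {
        publications {
            assets(MavenPublication) {
                groupId = 'com.example'
                artifactId = 'project-assets'
                artifact packageAssets   // the TAR produced above
            }
        }
        repositories {
            maven { url = 'https://repo.example.com/releases' }  // internal repo
        }
    }

A downstream job can then resolve com.example:project-assets:<version> from that repository and carry on, without caring which node or workspace produced it.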
An alternative (with similarities) is to build one or more assets, but only promote them after increasing numbers of tests are run, which can be done in separate builds coordinated by your build flow, using the Promoted Builds plugin etc.
Our repository has folders; the code in one folder sometimes depends on code in other folders, but only in one direction. By way of explanation:
C depends on B
B depends on A
We have 3 builds required on our Pull Request policy for master:
We have a build (BuildC) that builds ONLY folder C
We have a build (BuildB) that builds B and C
We have a build (BuildA) that builds A, B, and C
The policy specifies:
Changes in folder C require BuildC
Changes in Folder B require BuildB
Changes in Folder A require BuildA
Desired effect:
Depending on the case, I want the Pull Request to require ONLY ONE of the three builds. Here are the cases:
BuildA - Should run when there are changes in folder A (even if there are changes elsewhere)
BuildB - Should run when there are changes in B (and/or C) but NOT IN A. If there are changes in folder A, this build should NOT run
BuildC - Should run when the only changes are in folder C... if changes exist in folder A and/or B in addition to C... this build should not run.
What actually happens is that if you change something in folder A and folder C, two builds run: BuildA and BuildC... and if the changes in folder C depend on the changes in folder A, then BuildC fails. In any case, the BuildC run is a waste.
Is there a way to have Azure DevOps queue only one build, and have it be the right one? So in our example case, BuildA would run but not BuildC; but if the changes were only in folder C, it would run BuildC.
There is no way to accomplish what you want using build triggers or policies; there is no "Don't build when there are changes in folder X" option. There are a few alternatives, though they require a bit of rethinking:
Option 1: Use jobs & conditions
Create a single Pipeline with a build stage and 4 jobs.
The first job uses a commandline tool to detect which projects need to be rebuilt and sets an output variable
The other 3 jobs depend on the first job and have a condition set on them to only trigger when a variable (set in the first job) has a certain value.
That way you can take complete control over the build order of all 3 projects.
Option 2: Use an orchestration pipeline
There are extensions in the marketplace to trigger another build and wait for its result.
Perform a similar calculation as in option 1 and trigger the appropriate build, waiting for its result.
Option 3: Use Pipeline Artifacts
Instead of building A+B+C in build C, download the results of A+B, then build C. This requires uploading pipeline artifacts at the end of each job; each subsequent job then does an incremental build by downloading those artifacts and skipping the parts that are already built.
You could even download the "last successful" results in case you want to skip building the code.
Option 4: Use NuGet
Instead of pipeline artifacts, use NuGet packages to publish the output from Build A and consume them in Build B. Or even publish A from job A and consume it in job B within the same build definition.
Option 5: Rely on incremental builds
If you're running on a self-hosted agent, you can turn off the "Clean" option for your pipeline. In case the same agent has built your build before, it will simply re-use the build output of the previous run, provided none of the input files have changed (and you haven't made any incorrect msbuild customizations). It will essentially skip building A if msbuild can calculate that it won't need to build A.
The advantage of a single build with multiple jobs is that you can specify the order of the jobs A, B, C and can control what happens in each job. The big disadvantage is that each job adds the overhead of fetching sources or downloading artifacts. You can optimize that a bit by clearly setting the wildcards for what pieces you want to publish and to restore.
If you don't need the sources in subsequent stages (and aren't using YAML pipelines), you can use my Don't Sync Sources task (even with Git) to skip the sync step, allowing you to take control over exactly what happens in each job.
Many of these options rely on you figuring out which projects contain changed files since the last successful build. You can use the git or tfvc commandline utilities to tell you which files were changed, but creating the perfect script may be a bit harder when you have build batching turned on, in which case multiple changes will trigger your build at once, so you can't just rely on the "latest changes". In that case you may need to use the REST API to ask Azure DevOps for all the commit IDs or changeset numbers associated with the build, and then do the proper diff to calculate which projects contain changes.
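As a rough example of that detection step (a standalone Groovy script here, though any scripting language works; the folder names, the base commit argument, and the output variable name are all assumptions), the first job from option 1 could run something like:

    // detect-changes.groovy: decide which single build is needed, based on the
    // files changed since a base commit that the caller passes in (e.g. the
    // target branch, or the commit of the last successful build).
    def baseCommit = args.length > 0 ? args[0] : 'origin/master'

    def changedFiles = "git diff --name-only ${baseCommit}...HEAD".execute().text.readLines()

    def buildToRun
    if (changedFiles.any { it.startsWith('A/') }) {
        buildToRun = 'BuildA'        // A changed, so build A, B and C
    } else if (changedFiles.any { it.startsWith('B/') }) {
        buildToRun = 'BuildB'        // B (and maybe C) changed, but not A
    } else {
        buildToRun = 'BuildC'        // only C changed
    }

    // Azure DevOps logging command: expose the decision as an output variable
    // that the conditions on the other jobs can read.
    println "##vso[task.setvariable variable=buildToRun;isOutput=true]${buildToRun}"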
Long-term, relying on a single build with multiple jobs or nuget packages is likely going to be easier to maintain.
I am new to using Azure DevOps builds/pipelines. As the source code for the solutions I need to build is in TFVC, I am limited to using Classic (i.e. UI) builds rather than YAML.
When I want to test changes to a build definition, I sometimes want to run a clean build, i.e. ensure that sources and artifacts from earlier builds are removed before the new build runs, yet leave normal builds (i.e. ones triggered by changes in TFVC) incremental so as to keep builds fast.
I am used to TeamCity which has a plethora of options with regards to managing source and artifacts retention between builds, including a simple "clean" check box when triggering a manual build.
ADO Builds seems very limited in this regard; if I want a clean build, it seems the only option is to change the build definition, select clean, run the build, then change the build definition again to remove the clean option.
Are there better ways to manage "ad-hoc" clean sources and artifacts in ADO Builds?
Those settings are either on or off; they won't accept conditional run-time variables.
That being said, you might try leveraging the "Save as draft" option. It seems to create a DRAFT pipeline definition you could execute for your changes.
You could probably just flip it back to no clean before publishing. I don't really use that feature, but I am going to guess that on the back-end it is using a different temporary definition id. That will probably mean a new folder gets created under _work on the build agent. If that is the case, you probably wouldn't even need to flip the clean sources option, since the folder will not exist on the first run. It also probably means that if this is a self-hosted agent you will have doubled the work folder size, and you might have to manage that clean-up after you are done.
If it does create the second work folder, this is probably preferable, as it means you won't break the incremental build on the build directly following your test with clean, whether you did that ad hoc or by editing the build definition.
The Build.Clean variable is deprecated; currently you can only use the Clean option to clean the local repo on the agent.
I'd suggest you submit a user voice suggestion at the website below; the product team will evaluate it carefully:
https://developercommunity.visualstudio.com/content/idea/post.html?space=21
One workaround is adding a Post Build Cleanup task at the end of your pipeline; when you want to run builds incrementally, you can disable this task.
I'm having a hard time figuring out how to correctly deploy to different environments with TeamCity (in terms of cross-BuildConfiguration dependencies) and hope to get some input as to how to configure my SubProjects/BuildConfigurations properly. Let's start with a concrete example: I made this test project "TeamcityConfigurationTests" to better learn how TeamCity handles dependencies, and its current state shows the result I am looking for.
I have 3 subprojects, Dev, Test and Prod, with all associated tasks for those "environments" as separate build configurations within each subproject. This is to more clearly visualize what is going on, and if anything breaks, to be able to see immediately what is broken (separate Build, UnitTest and DeployToDev build configurations, rather than 3 different steps in one single build configuration).
Ideally, I only want to build my application once, in the Dev.Build step, and let the Dev.UnitTest and Dev.DeployToDev steps grab that artifact to run tests and deploy. I have that working, by using snapshot and artifact dependencies. But I am having trouble getting the correct artifact when I want to deploy from Dev -> Test or Test -> Prod.
My issue is how to correctly reference the latest artifact successfully deployed to DEV when running Test.DeployToTest, and likewise the latest artifact successfully deployed to TEST when running Prod.DeployToProd. (Essentially I want to promote the artifact to the next environment.)
Now, my issue is that if Test.DeployToTest has a snapshot dependency on Dev.DeployToDev and an artifact dependency on Dev.Build, and the VCS source has changed since the deploy to Dev ran, it triggers all the DEV steps again. That is not the worst part: the same happens when I run Prod.DeployToProd if the VCS source has changed since the initial build on Dev (because of all the snapshot dependencies). Meaning that rather than promoting Test -> Prod, I build and deploy whatever is currently on VCS to Dev, Test AND Prod.
How am i supposed to set this up correctly?
The only other option I am aware of is letting Dev.DeployToDev also publish the same artifact, and only having a (LatestSuccessful) artifact dependency in Test.DeployToTest. I would also have to publish the artifact again in Test.DeployToTest, so that Prod.DeployToProd can have only a (LatestSuccessful) artifact dependency on Test.DeployToTest. (This would be to get rid of the snapshot dependencies causing previous environments to run build/deploy again in case of VCS changes.) But then I am publishing the artifact 3 times, rather than just once when the application is originally built in DEV, which I would like to avoid. Also, I have cases where no artifact is needed for deploying to Test and Prod, so there is no artifact to depend on (essentially I only need the BuildNumber from the "dependent" environment I want to promote from).
I hope for some input. Thank you
Regards
Frederik
For anyone wondering, I made a JetBrains support ticket, and got the following response:
Basically, there are two options to resolve your case:
Option 1: use the "Promote" action from the build's Actions top-right menu (or change the type of the Deploy* configurations to deployment and use the action from the block on the build results page). This is the preferred way: before deploying, you select the build to deploy and "promote" it to the next environment. There is also an experimental hidden feature to hide the "Run" button: add a "teamcity.ui.runButton.caption" configuration parameter in the build configurations with an empty value.
Option 2: do not use a snapshot dependency, use only an artifact dependency on the latest successful build. However, when you run the build you cannot be sure that the last successful build you see will be deployed: while the build is standing in the queue, another Dev.DeployToDev can finish and then be deployed as the last successful one.
We went with option 1
I am working on a project with multiple people: a web application whose front end is built by webpack, uglified, and concatenated into a few files (e.g. app.min.js, style.min.css, etc.). As a result of this, in an effort to prevent merge conflicts we recently added the build folder to .gitignore, under the assumption that we would be able to build during deployment.
When pushing to the Master branch, we automatically "deploy" through Semaphore CI (similar to Travis) which runs composer install, npm install, and finally "npm run build" which triggers the webpack build. This is all built and then tested on the CI side of things, and then Semaphore automatically deploys to Amazon's Elastic Beanstalk where our application is hosted.
The problem with this is that it seems Semaphore doesn't upload the build it has just tested, but rather the Master branch itself, which has no built JS or CSS. I'm wondering if there's a way to push these built files to deployment as well, or if running the entire build process AGAIN on Elastic Beanstalk is the only route. It seems unnecessary to have to do that process essentially 3 times: locally, on CI, and then on deployment. Every time a step like this is needed on EB, the actual re-instantiation time gets longer, which I'd like to keep as short as possible.
Obviously, if building it a 3rd time on EB is the only way to go about this then I'll have to; I'm just wondering if there are better solutions for this whole workflow.
I haven't worked with Semaphore CI, but you might be able to use an .ebignore file.
If you create one, the EB CLI will use that instead of your .gitignore file.
I find in some deployment situations you want the inverse of your .gitignore (all compiled, no src). It essentially lets you pick the files from your project directory that you want to deploy, in the same way as the .gitignore file.
Edit: I just noticed the documentation on AWS is lacking. It only mentions file exclusion, but you can include files too.
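For example, a minimal .ebignore along these lines (it uses the same pattern syntax as .gitignore, including negation; the file and folder names are illustrative) deploys only the built output:

    # Ignore everything at the top level...
    /*
    # ...except the built output and the files EB needs to run the app.
    !/build/
    !/package.json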
Edit 2: I don't think Semaphore supports the use of .ebignore, so right now this solution isn't of any use. :(
I just had a great first experience with https://deploybot.com/. They can deploy directly to Elastic Beanstalk. It might be interesting for you.
Background:
We have one Jenkins job (Production) to build a deliverable every night. We have another job (ProductionPush) that pushes out the deliverable over a proprietary protocol to production machines the next day. This is because some production machines are only available during certain hours during the day (It also gives us a chance to fix any last-minute build breaks). ProductionPush needs access to the deliverable built by the Production job (so it needs access to the same workspace). We have multiple nodes and concurrent builds (and thus unpredictable workspaces) and prefer not to tie the jobs to a fixed node/workspace since resources are somewhat limited.
Questions:
How to make sure both jobs share the same workspace and ensure that ProductionPush runs at a fixed time the next day only if Production succeeds -- without fixing both jobs to run out of the same node/workspace? I know the Parameterized Trigger Plugin might help with some of this but it does not seem to have time delay capability and 12 hours seems too long for a quiet period.
Is sharing the workspace a bad idea?
Answer 2: Yes, sharing a workspace is a bad idea. There is the possibility of file locks. There is the issue of the workspace being wiped out. Just don't do it...
Answer 1: What you need is to Archive the artifacts of the build. This way, the artifacts for a particular build (by build number) will always be available, regardless of whether another build is running or not, or what state the workspaces are in.
To Archive the artifacts
In your build job, under Post-build Actions, select Archive the artifacts
Specify what artifacts to archive (you can use a combination of below)
a) You can archive all: *.*
b) You can archive a particular file with wildcards: /path/to/file_version*.zip
c) You can ignore the intermediate directories like: **/file_version*.zip
To avoid storage problems with many artifacts, at the top of the configuration you can select Discard Old Builds, click the Advanced button, and play around with Days to keep artifacts and Max # of builds to keep with artifacts. Note that these two settings do not control how long the actual builds are kept (other settings control that).
To access artifacts from Jenkins
In the build history, select any previous build you want.
In addition to SCM changes and revisions data, you will now have a Build Artifacts link, under which you will find all the artifacts for that particular build.
You can also access them with Jenkins' permalinks, for example
http://JENKINS_URL/job/JOB_NAME/lastSuccessfulBuild/artifact/ and then the name of the artifact.
To access artifacts from another job
I've extensively explained how to access previous artifacts from another deploy job (in your example, ProductionPush) over here:
How to promote a specific build number from another job in Jenkins?
If your requirements are to always deploy latest build to Production, you can skip the configuration of promotion in the above link. Just follow the steps for configuration of the deploy job. Once you have your deploy job, if it is always run at the same time, just configure its Build periodically parameters. Alternatively, you can have yet another job that will trigger the deploy job based on whatever conditions you want.
In either case above, if your Default Selector is set to Latest successful build (as explained in the link above), the latest build will be pushed to Production.
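If these were Pipeline jobs rather than freestyle jobs, the same archive-and-fetch pattern could be sketched roughly like this (the copyArtifacts step comes from the Copy Artifact plugin; the schedule, script name, and artifact pattern are illustrative, and the Production job would archive its deliverable with archiveArtifacts):

    // Sketch of a ProductionPush Pipeline: run at a fixed time and pull the
    // artifacts of the latest successful Production build before pushing.
    pipeline {
        agent any
        triggers { cron('H 9 * * *') }   // push the next morning
        stages {
            stage('Fetch deliverable') {
                steps {
                    copyArtifacts projectName: 'Production', selector: lastSuccessful()
                }
            }
            stage('Push to production machines') {
                steps {
                    sh './push-deliverable.sh'   // placeholder for the proprietary push
                }
            }
        }
    }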
Not sure archiving artifacts is really a good idea. A staging repository might be better as it enables cross-functional teams to share artifacts across different builds when required by tweaking the Maven settings.xml file.
You really want a deployable (ear/war) as the thing that gets built, tested, then promoted to production once confidence is high with the build.
Use a build number on your deployable (major.minor.buildnumber). This is the thing you promote to production, provided your tests can be relied upon. Don't use a hyphen to separate the minor version from the build number, as that forces Maven to perform a lexical comparison; a decimal point forces a numeric comparison, which will give you far fewer headaches.
Also, you didn't mention your target platform, but using a Maven DEB/RPM packaging plugin to push a package to an APT/YUM repo that's available to the production box (AFTER successful testing!) would be a good fit, as per industry standards.