Can a workflow be triggered from a post-build action of a freestyle project? - jenkins-workflow

I have a Jenkins Workflow which I am able to run by clicking Build. But when I try to start it from the Build other projects post-build action of a (freestyle) project, I just get an error of the form
my-flow is not buildable
and the downstream flow is not run when the upstream project is built.

The post-build action Build other projects does not simply do what it sounds like: build those projects when the step is run. In fact running the step does nothing at all. Instead, it adds an edge to the dependency graph for each named project, and downstream projects in that graph are then run by separate logic. Currently the dependency graph API is defined in such a way that Workflow jobs cannot participate. Long story short, that mode does not work.
The Parameterized Trigger plugin offers other ways to start downstream jobs. The nonblocking trigger works much like the Jenkins core trigger: it affects the dependency graph. There is also a blocking trigger (which is a build step, not a post-build action), which just does what you probably expected: start the downstream build (much like the build step in Workflow). Currently this plugin does not support Workflow, though it would probably be easy to make it use more current APIs so that it would; see JENKINS-26050.
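For comparison, this is roughly what that build step looks like inside a Workflow script (the job name here is just a placeholder):

    // Workflow script sketch: the build step starts the named job and waits
    // for it to complete before the flow continues.
    // 'downstream-job' is a placeholder; substitute a real job name.
    build 'downstream-job'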
What does work is to configure the relationship in the reverse direction: in the Workflow job configuration, select Build after other projects are built and select your freestyle project. Now when the freestyle project finishes building, the Workflow job is triggered.

Related

Can I delete a Github workflow run within the workflow itself?

We developed a CI system that runs via Github actions on self-hosted runners.
In order to prevent building unnecessarily, we incorporated a pre-build job that triggers a build only if the commit message has [trigger ci] anywhere.
Now we are facing a slight annoyance: even if the build job doesn't run, we still get a workflow run in the workflow runs list, as expected, but that list has basically become a copy of the commit history, making it very hard to navigate or to find which runs actually built.
Is there any way that I can delete a workflow run from the workflow runs list conditionally, within the workflow itself? This way, we can keep only the runs that did trigger a build, and delete the ones that skipped building.

How to run an ad-hoc clean build in Azure DevOps?

I am new to using Azure DevOps builds/pipelines, and as the source code for the solutions I need to build is in TFVC, I am limited to using the Classic (i.e. UI) builds rather than YAML.
When I want to test changes to a build definition I sometimes want to run a clean build, i.e. ensure that sources and artifacts from earlier builds are removed before the new build runs, yet leave normal builds (i.e. ones triggered by changes in TFVC) incremental so as to keep builds fast.
I am used to TeamCity which has a plethora of options with regards to managing source and artifacts retention between builds, including a simple "clean" check box when triggering a manual build.
ADO Builds seems very limited in this regard: if I want a clean build, the only option seems to be to change the build definition, select clean, run the build, then change the build definition again to remove the clean option.
Are there better ways to manage "ad-hoc" clean sources and artifacts in ADO Builds?
Those settings are simply on/off; they don't accept conditional run-time variables.
That being said, you might try leveraging the "Save as draft" option. It seems to create a DRAFT pipeline definition you could execute for your changes.
You could probably just flip it back to no clean before publishing. I don't really use that feature, but my guess is that on the back end it uses a different, temporary definition id. That probably means a new folder gets created under _work on the build agent. If that is the case, you probably wouldn't even need to flip the clean-sources setting, since the folder won't exist on the first run. It also probably means that, if this is a self-hosted agent, you will have doubled the work folder size and may have to manage that clean-up after you are done.
If it does create the second work folder, that is probably preferable, as it means you won't break the incremental build on the build directly following your test with clean, whether you ran that test ad hoc or by editing the build definition.
The Build.Clean variable is deprecated; currently you can only use the Clean option to clean the local repo on the agent.
I'd suggest you submit a feature request at the website below; the product team will evaluate it:
https://developercommunity.visualstudio.com/content/idea/post.html?space=21
One workaround is to add a Post Build Cleanup task at the end of your pipeline; when you want to run builds incrementally, you can disable this task.

Triggering a build on completion of another build when a Pull Request comes in

We have a web application in an Azure DevOps repo and there's a branch policy on the master branch that kicks off a build when a pull request is created. This validates that it compiles and performs code quality checks and the like.
We also have some integration tests (using Mocha and Selenium) that live in another repo. I would like to run the integration tests when a PR against master is created.
As far as I know I cannot have the same build pull from two different repos (without using extensions, and it seems cleaner to me to have two separate builds anyway). So I thought I would have another build just to run the integration tests. The build that pulls from the webapp repo would have a final step that deploys to an integration-tests environment, and the second build would then get the latest version of the integration tests and run them against that environment. I created a Build Completion trigger on the integration tests build so that it is triggered by the completion of the webapp build.
The problem is that when I queue the webapp build manually, it will launch the integration tests build when done. But when the webapp build is queued by an incoming PR, the integration tests build does not get triggered.
Is this a bug in Azure DevOps or am I going about this wrong?
On my side as well, builds started from a PR don't trigger other builds via the Build Completion trigger; I don't know whether it's a bug or by design.
Anyway, there is a workaround: make the final step in the first build trigger the second build, using the Trigger Build task.
You just need to change the branch, because the PR build runs on a merge branch that doesn't exist in the tests repository.
You can also do it without installing extensions, using a PowerShell task and the REST API.
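A rough sketch of that REST call follows; the organization, project, definition id, and personal access token below are placeholders, and the same request could equally be issued from a PowerShell task at the end of the first build:

    // Queue a build of the integration-tests definition via the Azure DevOps REST API.
    // Organization, project, definition id, and the PAT are placeholders.
    def org = 'my-org'
    def project = 'my-project'
    def pat = System.getenv('AZDO_PAT')
    def body = '{"definition": {"id": 42}, "sourceBranch": "refs/heads/master"}'

    def conn = new URL("https://dev.azure.com/${org}/${project}/_apis/build/builds?api-version=5.0").openConnection()
    conn.requestMethod = 'POST'
    conn.doOutput = true
    conn.setRequestProperty('Content-Type', 'application/json')
    conn.setRequestProperty('Authorization', 'Basic ' + (':' + pat).bytes.encodeBase64().toString())
    conn.outputStream.write(body.getBytes('UTF-8'))
    assert conn.responseCode in [200, 201]   // the queued build is returned as JSON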

Need to add reference from one VSTS Build task to another task

I am working on creating a custom VSTS build task using the hosted agent and a PowerShell script. I just want to reference an existing task that is available out of the box in VSTS (Publish Build Artifacts). Is there any way to reference that task from my custom task, or do I have to implement the functionality provided by Publish Build Artifacts manually?
The way to reference another task is to grab the sources from the task repository on GitHub and package them with your own task (in a subfolder). You'll need to copy the inputs from their task.json and merge them into yours if you want to allow other users to configure the fields exactly the same way as the other task does.
You can find the task implementations here: https://github.com/Microsoft/vsts-tasks. Make sure you select the right branch: the master branch is the bleeding edge, and it may contain a version of the tasks that is not fully battle-tested or not compatible with the latest released version of the agent (or the minimum agent version you are targeting).
Or you can grab the implementation from a build agent's tasks directory.
Remember that for certain functionality, such as uploading artifacts, the VSTS Task SDK has built-in methods, which may make your life easier if you decide to implement the functionality on your own.
The team that built the agent has been pretty specific that tasks should be self-contained and need to package their own dependencies or flag a demand. This ensures that each task can evolve and change independently.

Build Flow vs Build Pipeline

I'm trying to split up a few Jenkins jobs using the Build Flow plugin so that instead of three monolithic jobs, we have three "starting points" that then use the DSL to trigger downstream jobs. I chose Build Flow over the Build Pipeline plugin because with Build Pipeline it seemed a lot harder to share jobs between different pipelines (i.e., sharing the workspace of the multiple starting jobs with a single compile job).
Previously, I had three jobs set up: Project-PR, Project-DEV, and Project-PROD.
Project-PR would build whenever a pull request happened in GitHub, and would just run a smaller subset of our unit tests, so that we could get quick verification that the PR is okay to merge.
Project-DEV would build whenever a feature branch was merged in GitHub into the main development branch, and could also be manually triggered and given a different branch to pull. It would run the full suite of unit tests -- basically a sanity check that everything is still good. Then it would compile and minify, push to a QA environment for testing, and run the full suite of integration tests against that QA environment. This step was configured as a parametrized build, with the parameter being the name of the branch to pull, test, and push. It would push to and set up a QA environment specific to that branch, so that we could QA multiple features without having to merge to development (i.e., feature-one.qa.example.com, feature-two.qa.example.com).
Project-PROD would only ever be manually triggered, and would run the full unit and integration test suites, compile and minify the front-end code (Less, JS, and CSS), and push the built code into a special "release branch" in GitHub that can then be deployed -- we haven't quite reached the point of Jenkins being in charge of deployment.
Now, what I wanted to set up was to split the subtasks into their own jobs, so that it'd be easy to set up new jobs without having to copy and paste all the build steps (or copy the job and change all the things that need to be unique). This would let us do things like create a copy of Project-DEV but switch out the last job for one that deploys to a staging environment set up in the cloud, or easily create a job that reports test results to a third-party source, e.g. copies the results to a shared network folder. Or any number of things. The goal is basically to use these subtask jobs as building blocks to build more complicated jobs, while also making it easier to update how one portion of the build works (for example, maybe we switch to a different technology for compiling, which might change how Jenkins compiles the code).
For example, the Project-PR would be split into the following:
Project-PULL (Build Flow) -> Project-SetupBuildEnv (Normal Job) -> Project-PartialUnitTests (Normal Job)
The SetupBuildEnv job would just pull down any NPM or Composer requirements, and set up the directories required for testing and building. PartialUnitTests would then run, and report its results back up to the Build Flow job.
The Project-DEV could be split up like so:
Project-DEV -> Project-SetupBuildEnv -> Project-FullUnitTests -> Project-Compile -> Project-Minify -> Project-DeployQA -> Project-FullIntegrationTests
This way, the parts of the build process that are shared ( in this case, Project-SetupBuildEnv ) can be easily shared between jobs, reducing duplication, and making it easier to update a step in the build process without having to remember EVERY job that uses that step.
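To make the intent concrete, the Build Flow DSL for the Project-DEV starting point would look roughly like this (job names follow the chain above; the branch parameter is hypothetical and error handling is omitted):

    // Build Flow DSL sketch for the Project-DEV starting point.
    // Each build(...) call triggers the named downstream job and waits for it to finish.
    build("Project-SetupBuildEnv")
    build("Project-FullUnitTests")
    build("Project-Compile")
    build("Project-Minify")
    build("Project-DeployQA", BRANCH: params["BRANCH"])   // hypothetical branch parameter
    build("Project-FullIntegrationTests")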
Right now, I'm using the Shared Workspace plugin so that all the steps use the same workspace. However, I'm running into an issue with that: it's not actually using one workspace. What's happening is that the Build Flow job will get a directory (e.g. /sharedspace/shared_one), and download the code from GitHub into there. Then it will trigger the DSL, which starts up the 'SetupBuildEnv' job. But instead of working inside the same directory, it will get a directory with a name like "/sharedspace/shared_one#2", and run the build setup task in there. Then when it goes to do the third step (unit testing), it fails, because now it's got a third directory (/sharedspace/shared_one#3), but that directory didn't have the setup run, so the required node and composer modules are missing. What's weird is that it looks like the Shared Workspace plugin is copying the first shared workspace to another directory and incrementing a counter (the #N part of the directory name) and giving that to the other jobs to work in.
So, question time:
is there a way to fix the Shared Workspace plugin so that it's actually only using one directory for each job?
if not, is it possible to have the Clone Workspace plugin take an argument, so I can specify what archived workspace to use instead of using the dropdown?
another possibility: would using the Shared Workspace plugin, but specifying the directory to use via the "Local subdirectory for repo (optional)" option in the advanced Git job options, work?
failing all that, is there some other way to set up a build pipeline that can share jobs with other pipelines that I've missed out on?
In my experience, even if you do get this working, it might not be a scalable way to go longer term. We've found the Shared Workspace plugin to be a thoroughly bad idea for long / complex builds (for reasons similar to yours, but also because scaling across dozens of slaves suddenly becomes hard). Arguably the idea is slightly against the spirit of modern scalable CI.
I'd instead delegate more to your build tools, be they Maven / Gradle, Ant, even Grunt, whatever. If you want to keep these builds truly modular, but can't afford to rebuild at each step (we decided full independence was worth wasting a few minutes per build) then perhaps look at creating useful artefacts at key stages - in your case, minified assets TARs, library JARs, or maybe webjars, or whatever, and deploy them to a (Maven?) repo.
Later build steps in your pipeline can quickly, easily, and repeatably pull the latest (or named version) assets from this centralised repo, and continue with the build process.
An alternative (with similarities) is to build one or more assets, but only promote them after increasing numbers of tests are run, which can be done in separate builds coordinated by your build flow, using the Promoted Builds plugin etc.