I have two builds configured such that one is supposed to trigger another on a successful run.
I have created a Build Config A, and a build config B that has a Finish Build Trigger linked to build A. Both A and B are very simple test builds having only a single command line build step echoing "Success", so that they will always succeed. Neither of these builds are part of build chains nor do they have any other snapshot dependencies or steps. Build A is finishing successfully but is not triggering Build B. What could be the cause of this?
Firstly, Finish Build Triggers should be avoided for two reasons: 1) they are confusing (hence this question), 2) they work backwards compared to how TeamCity usually works.
A Finish Build Trigger works in push mode: the trigger lives in Build B but watches Build A, and when a Build A finishes successfully it queues a new Build B. That is the reverse of how dependencies normally flow in TeamCity, where the build you start pulls in its prerequisites. To avoid this confusing configuration, I strongly urge you to use Snapshot Dependencies whenever possible. A Snapshot Dependency configured in Build B pointing to Build A (that is, you set up a dependency to A from B) will work as you seem to want the builds to work in the above example: when you start a Build B, Build A will run first and foremost.
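If you keep your configurations in the Kotlin DSL, that pull-style setup might look like the following sketch (the object names `BuildA`/`BuildB` and the failure action are placeholders for your own configuration):

```kotlin
import jetbrains.buildServer.configs.kotlin.*

object BuildB : BuildType({
    name = "Build B"

    dependencies {
        // Starting Build B first runs (or reuses) a compatible Build A,
        // instead of Build A pushing a new Build B onto the queue.
        snapshot(BuildA) {
            onDependencyFailure = FailureAction.FAIL_TO_START
        }
    }
})
```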
Our repository has folders; the code in one folder sometimes depends on code in other folders, but only in one direction. By way of explanation:
C depends on B
B depends on A
We have 3 builds required on our Pull Request policy for master:
We have a build (BuildC) that builds ONLY folder C
We have a build (BuildB) that builds B and C
We have a build (BuildA) that builds A, B, and C
The policy specifies:
Changes in folder C require BuildC
Changes in Folder B require BuildB
Changes in Folder A require BuildA
Desired effect:
Depending on the case, I want the Pull Request to require ONLY ONE of the three builds. Here are the cases:
BuildA - Should run when there are changes in folder A (even if there are changes elsewhere)
BuildB - Should run when there are changes in B (and/or C) but NOT IN A. If there are changes in folder A, this build should NOT run
BuildC - Should run when the only changes are in folder C... if changes exist in folder A and/or B in addition to C... this build should not run.
What actually happens is that if you change something in folder A and C, two builds run: BuildA and BuildC... and if the changes in folder C depend on folder A, then BuildC build fails. In any case, the run of buildC is a waste.
Is there a way to have Azure DevOps queue only 1 build... but the best one. So in our example case, BuildA will run but not BuildC... but if the changes were only in Folder C, it would run Build C?
There is no way to accomplish what you want using build triggers or policies. There is no "Don't build when there are changes in folder X". There are a few options though, but they require a bit of rethinking:
Option 1: Use jobs & conditions
Create a single Pipeline with a build stage and 4 jobs.
The first job uses a commandline tool to detect which projects need to be rebuilt and sets an output variable
The other 3 jobs depend on the first job and have a condition set on them to only trigger when a variable (set in the first job) has a certain value.
That way you can take complete control over the build order of all 3 projects.
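A sketch of that layout in pipeline YAML (the job names, the detection logic, and the `target` variable are all illustrative; the real first job would diff the changed files and set the variable accordingly):

```yaml
jobs:
- job: Detect
  steps:
  - bash: |
      # Illustrative only: decide which build is needed and expose it
      # as an output variable for the downstream jobs to check.
      echo "##vso[task.setvariable variable=target;isOutput=true]BuildA"
    name: detect

- job: BuildA
  dependsOn: Detect
  condition: eq(dependencies.Detect.outputs['detect.target'], 'BuildA')
  steps:
  - script: echo "Building A, B and C"

- job: BuildB
  dependsOn: Detect
  condition: eq(dependencies.Detect.outputs['detect.target'], 'BuildB')
  steps:
  - script: echo "Building B and C"

- job: BuildC
  dependsOn: Detect
  condition: eq(dependencies.Detect.outputs['detect.target'], 'BuildC')
  steps:
  - script: echo "Building only C"
```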
Option 2: Use an orchestration pipeline
There are extensions in the marketplace to trigger another build and wait for its result.
Perform a similar calculation as in option 1 and trigger the appropriate build, waiting for its result.
Option 3: Use Pipeline Artifacts
Instead of building A+B+C in build C, download the results from A+B, then build C. This requires uploading pipeline artifacts at the end of each job; each subsequent job then does an incremental build by downloading these artifacts, skipping the pieces that were already built.
You could even download the "last successful" results in case you want to skip building the code.
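Using the standard tasks, the publish/consume halves might look like this (the paths and the artifact name are made up):

```yaml
# At the end of the job that builds A:
- task: PublishPipelineArtifact@1
  inputs:
    targetPath: '$(Build.ArtifactStagingDirectory)/A'
    artifact: 'A-output'

# At the start of the job that builds B:
- task: DownloadPipelineArtifact@2
  inputs:
    artifact: 'A-output'
    path: '$(Pipeline.Workspace)/A'
```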
Option 4: Use NuGet
Instead of pipeline artifacts, use NuGet packages to publish the output from Build A and consume them in Build B. Or even: publish A in job A and consume it from job B in the same build definition.
Option 5: Rely on incremental builds
If you're running on a self-hosted agent, you can turn off the "Clean" option for your pipeline. In case the same agent has built your code before, it will simply reuse the build output of the previous run, provided none of the input files have changed (and you haven't made any incorrect MSBuild customizations). It will essentially skip building A if MSBuild can calculate that it doesn't need to build A.
The advantage of a single build with multiple jobs is that you can specify the order of the jobs A, B, C and can control what happens in each job. The big disadvantage is that each job adds the overhead of fetching sources or downloading artifacts. You can optimize that a bit by clearly setting the wildcards for what pieces you want to publish and to restore.
If you don't need the sources in subsequent stages (and aren't using YAML pipelines), you can use my Don't Sync Sources task (even with Git) to skip the sync step, allowing you to take control over exactly what happens in each job.
Many of these options rely on you figuring out which projects contain changed files since the last successful build. You can use the git or tfvc command-line utilities to tell you which files were changed, but creating the perfect script may be a bit harder when you have build batching turned on: multiple changes will then trigger your build at once, so you can't just rely on the "latest changes". In that case you may need to use the REST API to ask Azure DevOps for all the commit IDs or changeset numbers associated with the build, and do the proper diff against those to calculate which projects contain changes.
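As a rough sketch of that calculation, assuming the top-level folders are literally `A/`, `B/` and `C/`, and that a plain `git diff` against the last successful build's commit is good enough (i.e. no build batching; `LAST_GOOD_COMMIT` is a placeholder you would fill from a variable or the REST API):

```shell
#!/usr/bin/env sh
# Produce the list of changed files, for example:
#   changes=$(git diff --name-only "$LAST_GOOD_COMMIT" HEAD)

# Map a newline-separated list of changed paths to the single "best" build:
# anything under A/ wins, then B/, then C/.
pick_build() {
  changes="$1"
  if printf '%s\n' "$changes" | grep -q '^A/'; then
    echo "BuildA"
  elif printf '%s\n' "$changes" | grep -q '^B/'; then
    echo "BuildB"
  else
    echo "BuildC"
  fi
}
```

The queued pipeline (or an orchestration build) would then run only the build this function names.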
Long-term, relying on a single build with multiple jobs or nuget packages is likely going to be easier to maintain.
I am new to using Azure DevOps builds/pipelines. As the source code for the solutions I need to build is in TFVC, I am limited to using the Classic (i.e. UI) builds rather than YAML.
When I want to test changes to a build definition I sometimes want to run a clean build, i.e. ensure that sources and artifacts from earlier builds are removed before the new build run, yet leave normal builds (i.e. ones triggered by changes in TFVC) to be incremental so to make builds faster.
I am used to TeamCity which has a plethora of options with regards to managing source and artifacts retention between builds, including a simple "clean" check box when triggering a manual build.
ADO Builds seems very limited in this regard: if I want a clean build, the only option seems to be to change the build definition, select clean, run the build, then change the build definition again to remove the clean option.
Are there better ways to manage "ad-hoc" clean sources and artifacts in ADO Builds?
Those settings are either on/off. They wouldn't accept conditional run-time variables.
That being said, you might try leveraging the "Save as draft" option. It seems to create a DRAFT pipeline definition you could execute for your changes.
You could probably just flip it back to "no clean" before publishing. I don't really use that feature, but I am going to guess that on the back-end it uses a different temporary definition id. That would probably mean a new folder gets created under _work on the build agent. If that is the case, you probably wouldn't even need to flip the clean-sources setting, since the folder won't exist on the first run. It also probably means that, on a self-hosted agent, you will have doubled the work folder size and may have to manage that clean-up after you are done.
If it does create the second work folder, this is probably preferable, as it means you won't break the incremental build on the run directly following your test with clean, whether you did that ad hoc or by editing the build definition.
The Build.Clean variable is deprecated; currently you can only use the Clean option to clean the local repo on the agent.
I'd suggest you submit a user voice at the website below; the product team will evaluate it carefully:
https://developercommunity.visualstudio.com/content/idea/post.html?space=21
One workaround is to add a Post Build Cleanup task at the end of your pipeline; when you want to run builds incrementally, you can disable this task.
I'm having a hard time figuring out how to correctly deploy to different environments with TeamCity (in terms of cross-build-configuration dependencies) and hope to get some input on how to configure my subprojects/build configurations properly. Let's start from a concrete example: I made this test project, "TeamcityConfigurationTests", to better learn how TeamCity handles dependencies, and its current state shows the result I am looking for:
I have 3 subprojects, Dev, Test and Prod, with all tasks for those "environments" as separate build configurations within each subproject. This is to visualize more clearly what is going on and, if anything breaks, to see immediately what is broken (separate Build, UnitTest and DeployToDev build configurations, rather than 3 different steps in one single build configuration).
Ideally, I only want to build my application once, in the Dev.Build step, and let the Dev.UnitTest and Dev.DeployToDev steps grab that artifact to run tests and deploy. That I have working, via snapshot and artifact dependencies. But I am having trouble getting the correct artifact when I want to deploy from Dev -> Test or Test -> Prod.
My issue is correctly referencing the latest successfully deployed DEV artifact when running Test.DeployToTest, and likewise the latest successfully deployed TEST artifact when running Prod.DeployToProd. (Essentially I want to promote the artifact to the next environment.)
Now, my issue is: if Test.DeployToTest has a snapshot dependency on Dev.DeployToDev and an artifact dependency on Dev.Build, and the VCS source has changed since the deploy to Dev ran, it triggers all the DEV steps again. That is not the worst part: the same happens when I run Prod.DeployToProd if the VCS source changed since the initial build on Dev (because of all the snapshot dependencies). Meaning that rather than promoting Test -> Prod, I build and deploy whatever is currently in VCS to Dev, Test AND Prod.
How am I supposed to set this up correctly?
The only other option I am aware of is letting Dev.DeployToDev also publish the same artifact, and having only a (latest successful) artifact dependency in Test.DeployToTest. I would also have to publish the artifact again in Test.DeployToTest, so that Prod.DeployToProd can have only a (latest successful) artifact dependency on Test.DeployToTest. (This would get rid of the snapshot dependencies causing previous environments to run build/deploy again on VCS changes.) But then I am publishing the artifact 3 times, rather than just once when the application is originally built in DEV, which I would like to avoid. Also, I have cases where no artifact is needed for deploying to Test and Prod, so there is no artifact to depend on (essentially I only need the build number from the "dependent" environment I want to promote from).
I hope for some input. Thank you
Regards
Frederik
For anyone wondering, I made a JetBrains support ticket and got the following response:
Basically, there are two options to resolve your case:
Option 1: use the "Promote" action from the build's Actions top-right menu (or change the type of the Deploy* configurations to deployment and use the action from the block on the build results page). This is the preferred way: before deploying, you select the build to deploy and "promote" it to the next environment. There is also an experimental hidden feature to hide the "Run" button: set the "teamcity.ui.runButton.caption" configuration parameter in the build configurations to an empty value.
Option 2: do not use a snapshot dependency; use only an artifact dependency on the latest successful build. However, when you run the build you cannot be sure that the last successful build you see will be the one deployed: while the build is standing in the queue, another Dev.DeployToDev can finish and then be deployed as the last successful one.
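For reference, option 2 would roughly be the following Kotlin DSL sketch (the configuration names and the artifact rule are placeholders):

```kotlin
import jetbrains.buildServer.configs.kotlin.*

object TestDeployToTest : BuildType({
    name = "Test.DeployToTest"

    dependencies {
        // No snapshot dependency: just grab the artifact of the latest
        // successful Dev deployment, whatever revision it was built from.
        artifacts(DevDeployToDev) {
            buildRule = lastSuccessful()
            artifactRules = "app.zip => ."
        }
    }
})
```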
We went with option 1
I have a Jenkins Workflow which I am able to run by clicking Build. But when I try to start it from the Build other projects post-build action of a (freestyle) project, I just get an error of the form
my-flow is not buildable
and the downstream flow is not run when the upstream project is built.
The post-build action Build other projects does not simply do what it sounds like: build those projects when the step is run. In fact running the step does nothing at all. Instead, it causes the named projects to be included in an edge of the dependency graph, and downstream projects according to the graph are then run according to separate logic. And currently the dependency graph API is defined in such a way that Workflow jobs cannot participate. Long story short, that mode does not work.
The Parameterized Trigger plugin offers other ways to start downstream jobs. The nonblocking trigger works much like the Jenkins core trigger: it affects the dependency graph. There is also a blocking trigger (which is a build step, not a post-build action), which just does what you probably expected: start the downstream build (much like the build step in Workflow). Currently this plugin does not support Workflow, though it would probably be easy to make it use more current APIs so that it would: JENKINS-26050
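For comparison, inside a Workflow (Pipeline) script the push direction is just the `build` step, which does what you probably expected the post-build action to do (the job name here is illustrative):

```groovy
// Upstream Workflow script: run the upstream work, then start the
// downstream job and wait for its result.
node {
    // ... build and test the upstream project ...
}
build 'my-flow'   // triggers the job named "my-flow" and blocks until it finishes
```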
What does work is to configure the relationship in the reverse direction: in the Workflow job configuration, select Build after other projects are built and select your freestyle project. Now when the freestyle project finishes building, the Workflow job is triggered.
I have TeamCity set up to build Github pull requests as per these instructions: http://blog.jetbrains.com/teamcity/2013/02/automatically-building-pull-requests-from-github-with-teamcity/
I have added a VCS build trigger so that TeamCity polls Github looking for changes. This has no special settings enabled.
My build involves a shell script to set up dependencies and an Ant script to run PHPUnit. Right now, the Ant script fails (tests don't pass) with exit code 1. The build shows as a fail, and that should be that. However, every time the VCS build trigger looks for changes, it seems to find some, even though there have been no more commits. It then runs yet another build of the same merge commit and keeps repeating the build endlessly.
Why is it constantly thinking that there are changes when there are not?