I have Projects A, B, and C. I want to configure TeamCity so that when I run the Project B build, it first runs Project A, then Project B, and finally Project C. How should I configure this?
Order:
I will run Project B, but it should wait until Project A is built.
Then,
Project B will run.
Finally, Project C will be kicked off. How can I achieve this? Thank you.
Give project B a snapshot dependency on project A.
Give project C a finished build trigger on project B.
When you manually run B, it will first queue A because of the dependency. If A and B both succeed, the trigger will then queue C.
I’m trying to trigger a pipeline job in Project B from another in Project A. Is this allowed?
Is it possible to use the Pipelines of Pipelines feature for this?
Yes, it's possible using the TriggerPipeline step, provided the project integration has permission to trigger that pipeline.
Reference: https://www.jfrog.com/confluence/display/JFROG/TriggerPipeline
This step takes projectKey as a configuration option, so from Project A you can trigger a step in Project B.
I have two builds configured such that one is supposed to trigger another on a successful run.
I have created a Build Config A, and a build config B that has a Finish Build Trigger linked to build A. Both A and B are very simple test builds having only a single command line build step echoing "Success", so that they will always succeed. Neither of these builds are part of build chains nor do they have any other snapshot dependencies or steps. Build A is finishing successfully but is not triggering Build B. What could be the cause of this?
Firstly, Finish Build Triggers should be avoided for two reasons: 1) they are confusing (hence this question), 2) they work backwards compared to how TeamCity usually works.
A Finish Build Trigger starts the build it is configured in when the watched build finishes. In your example, the Finish Build Trigger in Build B watches Build A, so a finished Build A queues a new Build B, not the other way around. To avoid this confusing configuration, I strongly urge you to use Snapshot Dependencies whenever possible. A Snapshot Dependency configured in Build B pointing to Build A (that is, you set up a dependency on A from B) will work the way you seem to want in the above example: when you start a Build B, Build A will run first and foremost.
Our repository has folders; the code in one folder is sometimes dependent on code in other folders, but only in one direction. By way of explanation:
C depends on B
B depends on A
We have 3 builds required on our Pull Request policy for master:
We have a build (BuildC) that builds ONLY folder C
We have a build (BuildB) that builds B and C
We have a build (BuildA) that builds A, B, and C
The policy specifies:
Changes in folder C require BuildC
Changes in Folder B require BuildB
Changes in Folder A require BuildA
Desired effect:
Depending on the case, I want the Pull Request to require ONLY ONE of the three builds. Here are the cases:
BuildA - Should run when there are changes in folder A (even if there are changes elsewhere)
BuildB - Should run when there are changes in B (and/or C) but NOT IN A. If there are changes in folder A, this build should NOT run
BuildC - Should run when the only changes are in folder C... if changes exist in folder A and/or B in addition to C... this build should not run.
What actually happens is that if you change something in folders A and C, two builds run: BuildA and BuildC... and if the changes in folder C depend on folder A, then the BuildC build fails. In any case, the BuildC run is a waste.
Is there a way to have Azure DevOps queue only 1 build... but the best one. So in our example case, BuildA will run but not BuildC... but if the changes were only in Folder C, it would run Build C?
There is no way to accomplish what you want using build triggers or policies. There is no "Don't build when there are changes in folder X". There are a few options, but they require a bit of rethinking:
Option 1: Use jobs & conditions
Create a single Pipeline with a build stage and 4 jobs.
The first job uses a commandline tool to detect which projects need to be rebuilt and sets an output variable
The other 3 jobs depend on the first job and have a condition set on them to only trigger when a variable (set in the first job) has a certain value.
That way you can take complete control over the build order of all 3 projects.
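The detection job's script might look like the following minimal sketch. It assumes the projects live in top-level folders A/, B/, and C/, and the output variable name buildTarget is an arbitrary choice for this example:

```shell
#!/bin/sh
# Decide which single build should run, based on the changed paths.
# The folder names A/, B/, C/ and the variable name "buildTarget"
# are assumptions for this sketch.
pick_build() {
  # $@ = changed file paths; A wins over B, B wins over C
  case "$*" in
    *A/*) echo "BuildA" ;;
    *B/*) echo "BuildB" ;;
    *)    echo "BuildC" ;;
  esac
}

# Paths changed relative to the target branch of the pull request
CHANGED=$(git diff --name-only origin/master...HEAD 2>/dev/null)

# Publish the decision as an output variable; the three build jobs
# then each carry a condition comparing against it.
echo "##vso[task.setvariable variable=buildTarget;isOutput=true]$(pick_build $CHANGED)"
```

Each of the three build jobs would then depend on this job and have a condition comparing the variable to "BuildA", "BuildB", or "BuildC", so that exactly one of them runs.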
Option 2: Use an orchestration pipeline
There are extensions in the marketplace to trigger another build and wait for its result.
Perform a similar calculation as in option 1 and trigger the appropriate build, waiting for its result.
Option 3: Use Pipeline Artifacts
Instead of building A+B+C in build C, download the results of A+B, then build C. This requires uploading pipeline artifacts at the end of each job; each subsequent job then does an incremental build by downloading these artifacts, thereby skipping the build process for the already-built parts.
You could even download the "last successful" results in case you want to skip building the code.
Option 4: Use NuGet
Instead of pipeline artifacts, use NuGet packages to publish the output of Build A and consume it in Build B. Or even publish A in job A and consume it in job B within the same build definition.
Option 5: Rely on incremental builds
If you're running on a self-hosted agent, you can turn off the "Clean" option for your pipeline. In case the same agent has built your build before, it will simply re-use the build output of the previous run, provided none of the input files have changed (and you haven't made any incorrect MSBuild customizations). It will essentially skip building A if MSBuild can calculate that it doesn't need to build A.
The advantage of a single build with multiple jobs is that you can specify the order of the jobs A, B, C and can control what happens in each job. The big disadvantage is that each job adds the overhead of fetching sources or downloading artifacts. You can optimize that a bit by clearly setting the wildcards for what pieces you want to publish and to restore.
If you don't need the sources in subsequent stages (and aren't using YAML pipelines), you can use my Don't Sync Sources task (even with Git) to skip the sync step, allowing you to take control over exactly what happens in each job.
Many of these options rely on you figuring out which projects contain changed files since the last successful build. You can use the git or tfvc command-line utilities to tell you which files were changed, but creating the perfect script may be a bit harder when you have build batching turned on, in which case multiple changes will trigger your build at once, so you can't just rely on the "latest changes". In that case you may need to use the REST API to ask Azure DevOps for all the commit IDs or changeset numbers associated with the build, and do the proper diff to calculate which projects contain changes.
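A sketch of such a change-detection step, assuming the commit of the last successful build is supplied in LAST_GOOD (for example, obtained from the REST API as described above) and that each top-level folder corresponds to one project:

```shell
#!/bin/sh
# Map a list of changed file paths to the unique top-level folders
# (i.e. the projects) they belong to.
top_level_dirs() {
  printf '%s\n' "$@" | cut -d/ -f1 | sort -u
}

# LAST_GOOD is assumed to be supplied from outside, e.g. by querying
# the REST API for the last successful build; HEAD~1 is only a
# placeholder that covers the single-commit case.
LAST_GOOD=${LAST_GOOD:-HEAD~1}
CHANGED=$(git diff --name-only "$LAST_GOOD"..HEAD 2>/dev/null)

# Prints one project folder per line, e.g. "A" and "C"
top_level_dirs $CHANGED
```

The printed folder names can then drive which builds to queue or which jobs to enable.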
Long-term, relying on a single build with multiple jobs or on NuGet packages is likely going to be easier to maintain.
I have a build, say "test", under Builds and Releases, which is a common/dependent build for other builds like "test1" and "test2". I want the "test" build to run first whenever a commit is made for the other builds, "test1" and "test2", so that they can get the required dependencies from "test". Can you please let me know how this is achievable in VSTS?
Regards,
Shwetha
You can do it through phases, in a few simple steps:
If you are using TFVC: change the mappings in Get sources to add the common/dependent project.
If you are using Git: select the phase and check the Allow scripts to access the OAuth token option. (Optional: select the Options tab and change Project collection in Build job authorization scope if the common/dependent project is in another team project.)
Change the Visual Studio Build task to build just the common/dependent project in this phase.
Add another agent phase to build the other projects (e.g. test1, test2).
You may try using an extension such as Build Chain, Queue New Build, or Parallel Builds. These extensions allow you to launch and wait for other builds in a gated check-in definition for the test1 and test2 projects.
I am using Jenkins for automated builds.
My issue is that I want to download sources from SVN, run the build steps, and then, after the build steps have run, fetch the latest sources from SVN again.
Is there any plugin that satisfies this requirement?
Consider setting up two jobs (A and B) with a shared workspace (Job > Configure > Advanced Project Options; click the Advanced... button, check Use custom workspace, and define a location). Once job A finishes, it triggers job B, and job B then performs an svn update plus whatever else you need. To avoid parallel execution of A and B, check Block build when upstream project is building and Block build when downstream project is building.
Maybe not a plugin, but you can always run manual SVN commands as part of the build step
Add a new build step, "Execute shell" (if on Linux) or "Execute Windows batch command" (if on Windows).
Inside, write SVN commands, depending on your OS, for example:
svn up checkout_folder (note that the path is relative to the Jenkins workspace)
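Put together, the step body for the original requirement (update, build, update again) might look like the following sketch; checkout_folder and build.sh are placeholder names:

```shell
#!/bin/sh
# "Execute shell" build step (Linux). Paths are relative to the
# Jenkins workspace; checkout_folder and build.sh are placeholders.

# First pass: make sure the sources are present and up to date
svn update checkout_folder

# ... your existing build steps run here, e.g.:
# ./build.sh

# Second pass: pick up any commits that landed during the build
svn update checkout_folder
```

On Windows, the same svn commands work in an "Execute Windows batch command" step.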