How to manage PR validation builds for 100+ projects - azure-devops

We have 100+ services/apps in a repository in Azure DevOps. We have defined a single CI/CD YAML multistage pipeline for each (build and deployment). This limits blast radius and allows for auditability of each release of each project. We rely on templates for all the real pipeline work, so this is easy to maintain; each project just has a small root azure-pipelines.yml file that includes the needed templates.
Now, we'd like to start using PR validation builds. And, as best as I can tell, we have two options:
Create a separate PR build for every project and use the UI/API for policies to create 100+ policies
Create a single PR build that has stages for all 100+ projects.
I'm not a fan of the 1st option as now we'll have 200+ builds. The 2nd option is possible, but to avoid a 3 hour PR build, we'd need a way to only run needed stages (aka project builds).
Is there a 3rd option I'm missing? If the 2nd option is our best bet, how do we turn off stages for projects not changed in that PR (i.e. what condition would we use)?
(FYI, our policy is to change only one project per PR, but there are, on occasion, exceptions to that.)

My personal suggestion is also the second method. Although the build script in a single configuration file will be very large, that is still much better than having hundreds of build configuration files.
The difficulty is that these 100+ apps all live in one repository, which means the usual approaches, such as using the Build.Repository.Name value in a stage condition, will not work for you. There is also no predefined detail describing the source file paths changed in the commit.
So I suggest you and your team put the project name into your commit messages. Then, in the build pipeline, you can use the variable Build.SourceVersionMessage to read the commit message. Since this variable is only available at step level (it does not work at stage or job level), you need to add a task as the first step and apply the condition to it.
The idea is to add one step as the first step of every stage, used only for this conditional check: it evaluates whether Build.SourceVersionMessage mentions that project's prefix or key words, and the stage's real work runs only when it does.
If you use a condition like this:
condition: startsWith(variables['Build.SourceVersionMessage'], '[maven-plugin]')
then your commit messages must follow a strict format, starting with the specified project name.
Another condition you can consider is:
condition: contains(variables['Build.SourceVersionMessage'], 'maven-plugin')
This does not require a strict message format, but the project name still needs to appear somewhere in the commit message; it can then be evaluated in the step conditions as shown above.
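Putting the pieces together, a stage for one project might look roughly like the sketch below. This is only an illustration, not a tested pipeline; the stage name, pool, and build command are placeholders for whatever you already use in your templates:

stages:
- stage: maven_plugin
  displayName: PR validation for maven-plugin
  jobs:
  - job: build
    pool:
      vmImage: ubuntu-latest
    steps:
    # Build.SourceVersionMessage only resolves at step level, so the
    # condition goes on each step rather than on the stage or job.
    - script: echo "maven-plugin is mentioned in the commit message, building it"
      condition: contains(variables['Build.SourceVersionMessage'], 'maven-plugin')
    - script: ./build.sh maven-plugin   # placeholder for your real build step(s) or template
      condition: contains(variables['Build.SourceVersionMessage'], 'maven-plugin')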
Hope this helps.


How to provide dynamic values for approvals and checks in yaml pipelines?

I'm working on an integration between Azure Pipelines and ServiceNow's change management module. To achieve that, the ServiceNow Change Management extension has been installed and configured according to this documentation page. In Azure DevOps we are using multistage YAML pipelines, which should create standard preapproved changes in ServiceNow.
The connection itself between the two applications works fine, I managed to put together a pipeline that creates change requests, waits until their status changes and then closes them. However, I'd like to pass some values set in the pipeline runs to the created change requests and I couldn't find a way to do it.
First I added a service connection to our Azure DevOps project, and created the ServiceNow check for it. I experimented a little with adding different expressions to it, like setting the short description to ${{ parameters.shortDescription }}, or defining a variable in the pipeline as ShortDescription: ${{ parameters.shortDescription }} and using that variable in the check as $(ShortDescription) or $[ variables.ShortDescription ]. Unfortunately none of these expressions got resolved. I also realized it is possible to use the predefined variables, but the values I'd like to set are not possible to describe by predefined variables. For example, selecting an assignment group would be pretty straightforward from a parameter defined as a list, but impossible to select from predefined variables.
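For reference, the pipeline-side definitions I experimented with look roughly like this (a reconstructed sketch for illustration; the parameter name and default value are placeholders):

parameters:
- name: shortDescription
  type: string
  default: 'Change created from pipeline'   # placeholder default

variables:
  ShortDescription: ${{ parameters.shortDescription }}

# In the ServiceNow check configuration on the service connection, the short
# description field was then set to $(ShortDescription) or $[ variables.ShortDescription ],
# but neither expression was resolved.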
So as a next idea, I tried to link a variable group to the check and update the variables through logging commands. Even though the variables from the group got resolved, they only showed the static default values I had set through the UI; the dynamic values set via the logging commands were not visible. I played around for some time and verified that I can update the definition of the variable groups through the Azure CLI or the REST API, so I can add new variables or update existing ones. I therefore tried to add a new variable to the linked group during the pipeline run, named ShortDescription_$(Build.BuildId). Even though it got added properly, I could not use it within the check, because it required double variable resolution, like $(ShortDescription_$(Build.BuildId)), and this expression was not resolved, not even partly. It remained $(ShortDescription_$(Build.BuildId)).
Then I started thinking about using only one variable from the group with a static name (e.g. ShortDescription) for all pipeline runs. However, I feel it would create a race condition and could cause some inconsistencies.
So as a last resort, I tried to put together an extension with an Agent and a ServerGate task, which are capable of storing the values I want to pass to change request and reading the stored values in an agentless environment. The problem here is, that the second task is not visible as a check for service connections. It's there as a release pipeline gate and looks good there, but I can't utilize it that way. Based on a question I found, this does not seem to be the problem with my task. To verify it, I copied the content of the same ServiceNow check I used before, and added it to my extension as a contribution with a different task id. And it did not show up as the question stated.
Which means now I can either
create a change request through my custom server task (as the ServerGate task can be used properly in yaml if it is changed to a Server task), but that way I can't wait for the state change of the ServiceNow ticket, or
create the change request in a separate stage where I want to use it, update it first in the same stage where I created it via the first-party check and wait for the state change in the stage where I would normally create it.
The second can work, but it has its own problems, like having misleading values stored in the change request for the stage id field, or not having multiple change requests created for multiple run attempts of the deployment stage. Also I feel like it's not how the extension's task and check should be used.
Unfortunately, I'm out of ideas how this dynamic value passing can be achieved, if it's possible to do so in the first place. Could you please help me by sharing ideas, or pointing out errors in my attempts?

How to run the same Azure DevOps pipeline with different library variables - without cloning?

I have a reasonably complex release pipeline in Azure DevOps that releases a number of Azure apps, a database etc.
Each step is genericised using a library variable for the environment. For example:
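A deployment step genericised this way might look roughly like the following (an illustrative sketch in YAML task syntax; the task, app name, and variable names are placeholders):

- task: AzureWebApp@1
  displayName: Deploy web app to $(Environment)
  inputs:
    azureSubscription: 'my-azure-service-connection'   # placeholder service connection
    appName: 'mycompany-myapp-$(Environment)'          # e.g. ...-dev, ...-test, ...-prod
    package: '$(System.DefaultWorkingDirectory)/**/*.zip'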
But library variables are linked to a release or a selection of stages.
Currently I have to clone the entire pipeline and link a new library variable group in the clone to publish a different environment, but this is heavy on unwanted duplication and maintenance.
How can I run the same release pipeline with different library variables?
If I could do this, it would be possible to have a release for a given branch, for example, but I cannot see a way to do it.
As of this time, it is not supported to select which variable groups to use when you create a release.
If you only have one or several variables, I think you can use pipeline variables instead of variable groups, so that you can update them at release time. Here are the detailed steps:
Go to your pipeline editing page and select the "Variables" tab. Click "Add" to add a variable, then check the option "Settable at release time".
Try re-creating your release. You will find the variables defined in step 1, and you can edit them before creating the release.
If you have many variables, I suggest you change the structure of your pipeline to make it more suitable for deployment to multiple environments. As Daniel said, you can use a stage for each environment and then scope a variable group to each stage.
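For comparison, if the pipeline were a YAML multistage pipeline, that structure, with one variable group scoped to each environment stage, would look roughly like this (stage and group names are illustrative):

stages:
- stage: DeployDev
  variables:
  - group: MyApp-Dev        # library variable group for the dev environment
  jobs:
  - job: deploy
    steps:
    - script: echo "Deploying to $(Environment)"   # Environment is defined in the group

- stage: DeployProd
  variables:
  - group: MyApp-Prod       # library variable group for the prod environment
  jobs:
  - job: deploy
    steps:
    - script: echo "Deploying to $(Environment)"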

How to create single pipeline(s) for all projects in an organization

I have more than 35 projects in an organization in Azure DevOps. These are the solutions of a single product. Right now I am creating build and release pipelines for each project one by one.
If I create a pipeline for a single project and release the build, I want DevOps to create pipelines and release builds for all the other projects (in the same organization) together at once.
For example, there are projects A, B, C and D. If I create a release pipeline for project A, will DevOps automatically create release builds for the other projects B, C and D in the same organization at the same time?
We need to avoid creating pipelines for each project one by one. Is this possible, and is there any scripting or configuration to achieve this?
Thanks in advance.
The process of setting up multiple release definitions with the exact same tasks can be very time consuming and difficult to manage -- fortunately, our team solved this problem using Task Groups!
Task Groups allow you to bundle multiple steps into a single "group", parameterize them, and then invoke them like a single step. You can also edit the Task Group at any time which will cascade to all places where it's used.
The other consideration is that you can bundle your custom variables into Variable Groups and reuse them between Release Definitions. Having your configuration reusable also has the benefit of only having to edit a single item.
I'd recommend:
Create a single release definition as a golden reference example.
Add your build steps for a single stage/environment.
Once you're happy with this release definition, multi-select the steps you want to bundle into a group, open the context menu and select the option to convert to a Task Group. This will remove the steps from your release and move them to the Task Group.
Customize the Task Group with the appropriate parameters and then save it.
Modify your release definition to use the Task group parameters.
Add the additional stages/environments to your release definition, and add your custom Task Group to each, parameterizing it with configuration specific to that environment.
Once you're happy with the Release Definition -- use the Clone feature to create 34 more instances.
The same approach can be applied to builds.
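For YAML build pipelines, the rough equivalent of a Task Group is a parameterized step template that each project's small azure-pipelines.yml includes. A sketch of the idea, with illustrative paths and parameters:

# templates/build-steps.yml (illustrative template)
parameters:
- name: project
  type: string

steps:
- script: dotnet build src/${{ parameters.project }}/${{ parameters.project }}.csproj
  displayName: Build ${{ parameters.project }}

# Each project's azure-pipelines.yml then just includes it:
# steps:
# - template: templates/build-steps.yml
#   parameters:
#     project: ProjectA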

Persisting values across re-execution of release pipeline in Azure DevOps

I am configuring a release pipeline in Azure DevOps and I want the variables that get generated by the tasks along the way to persist across re-executions of the same release. I wanted to know if that is possible.
The main goal is to create a pipeline that I can redeploy in case of a failure. If, for example, I have a release pipeline with 30 tasks, I would want to handle skipping the tasks that already completed, but once I reach the relevant task, I need the persisted variable values.
I have looked online and I see it isn't possible to persist variables across phases, but does that also mean they cannot be persisted within the same release pipeline if I redeploy it?
From searching stack exchange and google I got to the following GitHub issue on the subject, I just wasn't sure if it also affects my own situation in the same way.
https://github.com/Microsoft/azure-pipelines-tasks/issues/4743
You have that by default, unless I'm interpreting you wrong: when redeploying the same release, the variable values you define in the pipeline do not change.
Calculated values, however, are not persisted.
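For example, a value computed at runtime with a logging command, as in the sketch below, lives only for that run; redeploying the release recomputes or loses it. (Shown as YAML steps for illustration, though the same logging command works in classic release tasks; the variable name is a placeholder.)

steps:
- script: |
    # Compute a value at runtime and expose it as a pipeline variable.
    # It is not stored with the release, so a later redeploy will not see it.
    GENERATED="run-$RANDOM"
    echo "##vso[task.setvariable variable=GeneratedValue]$GENERATED"
  displayName: Set a calculated variable (not persisted)
- script: echo "Using $(GeneratedValue)"
  displayName: Use it later in the same job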

Storing code metrics

I'd like to write a pre-commit hook that tells you if you've improved/worsened some code metric of a project (e.g. average function length). The hook would have to know what the previous average function length was, and I don't know where to store that information. One option would be to store an additional .metrics file in the repo, but that sounds clunky. Another option would be to git stash, compute the metrics, git stash pop, compute the metrics again and print the delta. I'm inclined to go with the latter. Are there any other solutions?
Disclaimer: I am the author of the Metrix++ tool, which I use in the workflow described below. I guess the same workflow can be executed with other tools capable of comparing results.
One of the ideas you suggested works perfectly if you add a couple of CI checks (see the steps below). I find it solid. I'm not sure why you consider it clunky.
I keep a file with metrics results which is updated before each commit and stored in VCS. Let's name this file metrics.db, and consider automating the following workflow on build/test of a project:
1) If metrics.db has not been changed since the last checkout (i.e. it is the original data for the previous/base revision), copy it to metrics-prev.db.
2) Collect metrics for the current code, which produces the metrics.db file again. Note: it is very helpful when a metrics tool can do iterative scans for better performance (i.e. calculate metrics only for updated functions/classes), because that gives you the opportunity to run the metrics tool on every build, including incremental ones.
3) Compare metrics-prev.db with metrics.db. If the metrics identify regressions, fail the build and [optionally] do not allow the commit (a team rule). If the metrics are good, the build is successful and the commit may happen.
4) [optionally] You may run Continuous Integration (CI) which validates that the committed metrics.db file corresponds to the committed code for the same revision (i.e. run the same steps 1-3 and make sure that the diff is zero at step 3). If the diff is not zero, it means somebody forgot to update the metrics.db file and presumably did not execute the pre-commit check, so revert the change.
5) [optionally] CI may run steps 1-3 itself if it fetches metrics.db from the previous revision as metrics-prev.db. In this case, CI can also check that the collected metrics.db is the same as the committed one (an alternative or an addition to step 4); a pipeline sketch of this check follows below.
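As an illustration, the CI validation described in steps 4 and 5 could be expressed as an Azure Pipelines YAML build roughly like the sketch below. The collect-metrics and compare-metrics commands are placeholders, since the exact invocation depends on the metrics tool you use:

trigger:
- main

pool:
  vmImage: ubuntu-latest

steps:
- checkout: self
  fetchDepth: 2                     # we also need the previous revision's metrics.db

- script: |
    # Step 1: take the committed metrics.db of the previous revision as the baseline.
    git show HEAD~1:metrics.db > metrics-prev.db
  displayName: Fetch baseline metrics

- script: |
    # Step 2: re-collect metrics for the current code (placeholder command).
    collect-metrics --output metrics.db
  displayName: Collect metrics for current code

- script: |
    set -e                                     # stop at the first failing check
    # Steps 3-4: fail on regressions, and fail if the committed metrics.db is stale.
    compare-metrics metrics-prev.db metrics.db
    git diff --exit-code -- metrics.db
  displayName: Check regressions and stale metrics.db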
Another implementation I have seen: metrics.db files are stored on a separate drive, outside VCS, and a custom script locates the corresponding metrics.db for a revision. I find this solution unreliable, as the drive can disappear, files can be moved or renamed, and so on. So placing the file in VCS is the better solution, but either will work.
I have attempted the alternative you suggested: switch to the previous revision and run the metrics tool twice. I abandoned this approach for several reasons: the metrics check script alters your source files (so it is impossible to include it in an iterative rebuild and keep working smoothly with your IDE, which will complain about changed files), and its performance is very poor (compared with iterative re-scans, it is extremely slow).
Hope it helps.