Running a build pipeline AFTER my code has been merged? - azure-devops

I'll do my best to explain my problem.
In standard practice, I have an Azure DevOps pipeline that creates a Terraform payload, invokes the Terraform API, and lets Terraform do its deployment based on the payload. I do this via "Build Validation" - whenever something is PR'd into my branch, the pipeline runs to make sure I'm deploying proper Terraform infrastructure and, in the process, deploys said resources if the pipeline run succeeds.
Meaning, the current workflow is:
Incoming PR -> Build Validation starts -> Pipeline runs -> Pipeline run succeeds -> Accept the PR and do a merge
However, the team I'm working with now wants the following:
Incoming PR -> Accept the PR and do a merge -> Build Validation starts -> Pipeline runs -> Pipeline run succeeds
Basically, they want to actually review the incoming PR, accept and merge it, and ONLY THEN have the actual pipeline/deployment process start. And I'm not sure how to perform this step. Looking into CI triggers, I couldn't find what I need. Any help appreciated.

As you said, you need to use the CI trigger.
Assuming the merge is to the master branch and you want to run the pipeline after the merge, add the trigger to the YAML:
trigger:
- master
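An equivalent fuller form of that trigger, in case you want to filter further - the paths filter here is an assumption about where the Terraform code lives:

trigger:
  branches:
    include:
    - master
  paths:
    include:
    - terraform/*   # optional: only run when the Terraform code changes

You would presumably also remove the Build Validation policy from the branch so the pipeline no longer runs on incoming PRs.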

I was looking for the same thing earlier. Unfortunately, Azure does not offer anything like this out of the box.
I think the easiest solution is to set up a protected intermediate branch, e.g. pre-master, then make PRs towards that one and disallow manually issued merges to master. Then, as proposed by others, set a CI trigger on pre-master, after which the pipeline commits to master. A sketch of this setup follows below.
You can then complete the ping-pong by defining a trigger on master that aligns pre-master afterwards.
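A minimal sketch of the pre-master pipeline, assuming the branch names above and that the build service account has permission to push to master:

trigger:
- pre-master

steps:
- checkout: self
  persistCredentials: true   # keep the auth token so the script step below can push
- script: echo "run the actual deployment here (terraform plan/apply, ...)"
  displayName: Deploy
- script: |
    # fast-forward master to the pre-master commit that just deployed successfully
    git push origin HEAD:master
  displayName: Promote to master
  condition: succeeded()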

Related

Is it possible to make the pass/fail status of an Azure pipeline dependent of the pass/fail status of a downstream pipeline?

I'm currently migrating code from Gitlab to Azure DevOps and Azure Pipelines. In Gitlab I have a two repo setup like this:
Pipeline A in repo 1 runs build & UTs. It triggers:
Pipeline B in repo 2 runs solution tests. If it passes:
Pipeline A publishes artifacts
You can do this in Gitlab using a depend strategy in your trigger job. This means that if Pipeline B fails, Pipeline A also fails, and so the publishing stage in Pipeline A is skipped.
Am I right in thinking that this setup is not natively supported in Azure Pipelines, e.g. with pipeline completion triggers? I don't think you can have one pipeline pausing half way through and waiting for another, or have the upstream pipeline's pass/fail status mirror that of the downstream pipeline.
If so, what do you think would be a good solution here? Is it possible to gate PR Build Validation on the downstream pipeline, so if pipeline A passed on repo 1's branch X but pipeline B failed, Azure wouldn't let me merge branch X?
I've had a couple of ideas for hacky workarounds if this isn't supported:
Write a script in pipeline A that kicks off pipeline B, sleeps, and periodically polls the API to check whether pipeline B has passed/failed
Checkout repo 2's code during pipeline A, and run the solution tests there
Do either of them sound sensible?
Is it possible to make the pass/fail status of an Azure pipeline dependent of the pass/fail status of a downstream pipeline?
There is no out-of-the-box way to achieve this at the moment.
Personally, I think your ideas are sound. I have encountered similar requests before - those were about releases, but the concept of the implementation is the same.
The main idea is:
1. Add a PowerShell task to pipeline A that calls the REST API to queue pipeline B.
2. Cyclically monitor the build result of pipeline B.
3. Once the result of pipeline B is available, call the logging command to set the build result of pipeline A accordingly.
You could check my previous thread for some more details:
Is it possible to run a "final stage" in Azure Pipelines without knowing how many stages there are in total?
And the logging command to set the build result:
##vso[task.complete result=Failed;]DONE
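A minimal sketch of those three steps as an inline PowerShell task - the organization, project, and pipeline ID are placeholders, and the 30-second polling interval is arbitrary:

- task: PowerShell@2
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)   # lets the script authenticate to the REST API
  inputs:
    targetType: inline
    script: |
      $org     = "https://dev.azure.com/yourorg"   # placeholder
      $project = "yourproject"                     # placeholder
      $headers = @{ Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN" }
      # 1. queue pipeline B (pipeline ID 42 is a placeholder)
      $run = Invoke-RestMethod -Method Post -Headers $headers -ContentType "application/json" `
        -Uri "$org/$project/_apis/pipelines/42/runs?api-version=6.0" -Body "{}"
      # 2. cyclically poll until pipeline B completes
      do {
        Start-Sleep -Seconds 30
        $run = Invoke-RestMethod -Headers $headers `
          -Uri "$org/$project/_apis/pipelines/42/runs/$($run.id)?api-version=6.0"
      } while ($run.state -ne "completed")
      # 3. mirror pipeline B's result onto pipeline A via the logging command
      if ($run.result -ne "succeeded") {
        Write-Host "##vso[task.complete result=Failed;]Pipeline B failed"
      }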

Multi job pipeline always checks out same commit?

I have defined a multi-job Azure pipeline where every job needs to clone and check out the source Git repository. Some of these jobs can take a while, so I am wondering whether every job clones and checks out the momentary HEAD commit of the Git branch, or whether the commit that triggered the pipeline is remembered and used for every job, in order to have a consistent pipeline run.
I am pretty sure (and can only hope) the second is the case - I have never seen anything else - but I am wondering if someone can point me to some Azure docs that officially confirm it.
When a pipeline is queued, the ref is set. All jobs in the pipeline will check out that reference. It's one of the reasons why the agent checks out the ref specifically and leaves the repo in a detached HEAD state.
I can't find a doc that explicitly explains this, but that's how it works.
In a build pipeline (both YAML and Classic), by default each job in the pipeline will check out the commit version which triggered the current pipeline run.
In YAML pipeline, if you do not want a job to check out the commit ref, you can add the following step as the first one in the job.
steps:
- checkout: none
. . .
Both @Jesse and @Bright are right.
There's a pipeline decorator that is evaluated on the first stage that looks for the presence of the checkout task. If the task is not found, it is dynamically inserted as the first step in the job.
You can see the pipeline decorator if you look at the top-level logs of the stage and expand the Job preparation parameters.
By specifying the checkout task with different settings, you can prevent this task from being injected:
steps:
- checkout: none
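For example, a minimal sketch of an explicit checkout step with non-default settings (the option values here are arbitrary):

steps:
- checkout: self
  fetchDepth: 1          # shallow fetch instead of full history
  clean: true            # wipe the workspace before checking out
  persistCredentials: false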

Prevent Cancellation of Deployment Job on PR Changes

We have an Azure DevOps YAML multi-stage pipeline where code is built and then deployed to a sequence of environments. Deployment is achieved using Terraform.
i.e.
Builds -> Test -> Deploy to DEV -> Deploy to TEST - - ->
This pipeline is used for both CI/CD builds and also PR builds. For the PR build, the only part of the deployment stage is running terraform plan on the TF scripts.
For PRs, the pipeline is configured to cancel the ongoing build if changes are pushed to that PR.
The problem we're seeing is that when changes are pushed to the PR and the ongoing build is cancelled, sometimes that cancellation happens during the terraform plan step. This occasionally means that the blob lease taken by terraform plan is not released. From that point onwards, manual intervention is required (break the blob lease) in order for the deployment stages to run successfully.
I believe we can switch off the setting which causes the ongoing PR build to be cancelled if changes are pushed.
But I wondered if there was a way of marking a pipeline step as critical - i.e. one that should run to completion even if the build is cancelled?
There are other ways of cancelling a pipeline build and there must be other tasks/steps which should not be cancelled part-way through. Such a critical-task setting would cover these situations too.
Not sure if you solved this, but I had the exact same issue. Adding condition: always() to my task forced it to complete when DevOps cancelled the pipeline after additional changes were pushed.
See https://learn.microsoft.com/en-us/azure/devops/pipelines/process/conditions?view=azure-devops&tabs=yaml:
jobs:
- job: Foo
  steps:
  - script: echo Hello!
    condition: always() # this step will always run, even if the pipeline is canceled
I'm afraid you won't be able to achieve this using just YAML. What you can do will require some additional effort (and in some cases quite a lot):
replace the Terraform scripts with Bicep, for instance - to me the syntax is quite similar, and what you gain here is that there is no Terraform state to lock
add, as your very first step, a check whether your state is locked, and break the lease if needed (a sketch follows below)
I understand you would like to hear something better than this, but at the moment there is no way to mark a task as non-cancellable. However, this sounds like a cool feature and a candidate for a feature request.
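A minimal sketch of that first-step lease check, assuming the Terraform state lives in an Azure Storage blob - the service connection, storage account, container, and blob names are all placeholders:

steps:
- task: AzureCLI@2
  displayName: Break stale tfstate lease if present
  inputs:
    azureSubscription: my-service-connection   # placeholder
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # if a previously cancelled run left the blob lease behind, break it
      LEASE=$(az storage blob show --account-name mystatestore \
        --container-name tfstate --name terraform.tfstate \
        --query properties.lease.status -o tsv)
      if [ "$LEASE" = "locked" ]; then
        az storage blob lease break --account-name mystatestore \
          --container-name tfstate --blob-name terraform.tfstate
      fi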

Azure Devops YAML Pipeline - Clean up dynamic deployments

Our current pipeline deploys a new instance of our app for every new branch created on azure repos, just like review apps on Heroku or Gitlab.
The creation part went smoothly, but I'm not sure what to do with the orphaned resources and deployment once the branch is deleted (hopefully by an accepted PR).
Deleting them manually is not an option, and I can't find a trigger for branch deletes in the documentation.
The only option I can see right now is to create a scheduled job for the master branch(since it always exists) with a bash script that compares the list of deployed apps and existing branches and cleans up the resources.
Is it my only option, or is there another way without a fairly complex, all-access, destroy machine?
So I did a little investigation: dumping all environment variables to Notepad++ and using the compare plugin, I realized that when a PR is accepted, two env variables differ.
First off, during a push the variable BUILD_REASON is set to "IndividualCI", with BUILD_SOURCEBRANCH set to "refs/heads/feature/******". When a pull request is initiated, BUILD_REASON changes to "PullRequest" and BUILD_SOURCEBRANCH to "refs/pull/***".
Finally, when a PR is accepted, the variables change to BUILD_REASON = "IndividualCI" and BUILD_SOURCEBRANCH = "refs/heads/master".
Once I figured this out, I could create a stage with the following condition:
- stage: CleanUp
  displayName: 'CleanUp'
  dependsOn: Test
  condition: and(succeeded(), in(variables['Build.Reason'], 'IndividualCI'), in(variables['Build.SourceBranchName'], 'master'))
The above stage will be triggered when a PR is accepted, so it can clean up the resources created during the PR :-) I haven't tested it all the way, but it seems to do the job.
You can use a webhook in Azure DevOps to watch the pull request for updates. When the pull request status changes to completed, fire a script that deletes the resources used for the PR.
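For the scheduled-cleanup idea from the question, a minimal sketch of a cron-triggered pipeline that deletes orphaned deployments - the naming convention (one resource group per branch, prefixed review-) and the service connection name are assumptions:

schedules:
- cron: "0 3 * * *"          # nightly at 03:00 UTC
  displayName: Nightly review-app cleanup
  branches:
    include:
    - master
  always: true               # run even if master has not changed

steps:
- task: AzureCLI@2
  inputs:
    azureSubscription: my-service-connection   # placeholder
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # delete any review-* resource group that no longer has a matching branch
      BRANCHES=$(git ls-remote --heads origin | sed 's|.*refs/heads/||')
      for RG in $(az group list --query "[?starts_with(name, 'review-')].name" -o tsv); do
        BRANCH=${RG#review-}
        echo "$BRANCHES" | grep -qx "$BRANCH" || az group delete --name "$RG" --yes --no-wait
      done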

Is there a way to make an Azure DevOps release only publish the actual latest change from a build pipeline?

I have a situation where two commits were merged to master (e.g. FIRST and SECOND) very close together (seconds apart). Both triggered the build pipeline: FIRST triggered the pipeline first and SECOND triggered it second (the builds ran in parallel). For whatever reason, the build pipeline for commit SECOND finished first, and 30 seconds later the build for commit FIRST finished.
My automatic release pipeline is configured to always get the "latest" artifact from the build pipeline. The sequence of events described above caused the SECOND change to be deployed first, and then the FIRST change was deployed next (since its pipeline finished second) and stomped on the prior release, effectively deploying old bits to the service.
Is there any way to prevent this situation? Even if a build pipeline finishes second for intermittent reasons, I don't want a release to stomp over a more recent change that happened to finish earlier.
EDIT: Thank you to those who suggested/supported the idea of batching builds but that's not an option I'm looking to enable. I still want each commit to trigger its own build (to enable easier assignment of build break cause). I'm just looking for the releases to trigger in the order of commits, not the order of builds finishing.
Thanks!
You can set batch to true in the trigger, so the system waits until the in-progress build is completed. Set the "Batch changes while a build is in progress" option to true in the Triggers tab of the build pipeline in Azure DevOps, or in YAML (with an explicit branches filter alongside batch):
trigger:
  batch: true
  branches:
    include:
    - master
If you use pull requests, there should be no issue, as a new push should cancel the in-progress run. Check autoCancel in PR triggers.
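For reference, a sketch of the PR trigger with autoCancel spelled out (it defaults to true; note that YAML pr triggers apply to GitHub and Bitbucket repos, while for Azure Repos the equivalent lives in the branch policy):

pr:
  autoCancel: true   # cancel the in-progress run when new changes are pushed to the PR
  branches:
    include:
    - master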
You may need to make the pipelines run on the same agent, so that the newest queued run waits for the previous one to complete.
You can follow the steps below to confine your pipeline to one agent:
1. Add a custom capability to the agent you want to run the pipeline on (Project settings -> Agent pools -> select a pool -> Agents -> select an agent -> Capabilities).
2. Add a demand to your pipeline.
I tested and found that Microsoft-hosted agent pools do not support demands for custom capabilities in YAML pipelines; the below YAML works only for self-hosted agent pools.
pool:
  name: Default
  demands: Tag -equals Agent1