Trigger DevOps pipeline before another pipeline - azure-devops

We have the following 3 pipelines:
CI
CD (Dev/QA/Prod)
Integration Tests (Dev/QA)
When we run the integration tests against an environment, is there any way to ensure the CD pipeline has been run first for the given environment, and if not, run it?
If two people are working on different branches, we would need to ensure that each person's branch is the last one deployed before running the integration tests against that branch. Is this possible?

Trigger DevOps pipeline before another pipeline
As a workaround, we could add an inline PowerShell task to the Integration Tests pipeline (before the test task) that calls the Builds - Queue REST API to trigger the CD pipeline, then monitors the state of that run: if it is not yet complete, wait 30 seconds and loop again until its status is complete.
POST https://dev.azure.com/{organization}/{project}/_apis/build/builds?definitionId=$defId&api-version=6.0
You could check my previous similar thread and this thread for some more details.
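For illustration, a minimal sketch of such a queue-and-poll step; the CD definition id (42) is a placeholder, and it assumes the job's access token is authorized to queue builds:

steps:
- powershell: |
    $headers = @{ Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN" }
    $base = "$(System.CollectionUri)$(System.TeamProject)/_apis/build/builds"
    # Queue the CD pipeline (definition id 42 is a placeholder)
    $run = Invoke-RestMethod -Uri "${base}?api-version=6.0" -Method Post -Headers $headers `
      -ContentType "application/json" -Body '{"definition": {"id": 42}}'
    # Poll every 30 seconds until the queued run completes
    while ($run.status -ne "completed") {
      Start-Sleep -Seconds 30
      $run = Invoke-RestMethod -Uri "${base}/$($run.id)?api-version=6.0" -Headers $headers
    }
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)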

Related

Is it possible to make the pass/fail status of an Azure pipeline dependent on the pass/fail status of a downstream pipeline?

I'm currently migrating code from Gitlab to Azure DevOps and Azure Pipelines. In Gitlab I have a two repo setup like this:
Pipeline A in repo 1 runs build & UTs. It triggers:
Pipeline B in repo 2 runs solution tests. If it passes:
Pipeline A publishes artifacts
You can do this in Gitlab using a depend strategy in your trigger job. This means that if Pipeline B fails, Pipeline A also fails, and so the publishing stage in Pipeline A is skipped.
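For reference, the GitLab side looks roughly like this; the project path and job name are placeholders:

# .gitlab-ci.yml in repo 1: the trigger job's status mirrors Pipeline B
run-solution-tests:
  stage: test
  trigger:
    project: my-group/repo2  # placeholder path to repo 2
    strategy: depend         # this job fails if the downstream pipeline fails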
Am I right in thinking that this setup is not natively supported in Azure Pipelines, e.g. with pipeline completion triggers? I don't think you can have one pipeline pausing half way through and waiting for another, or have the upstream pipeline's pass/fail status mirror that of the downstream pipeline.
If so, what do you think would be a good solution here? Is it possible to gate PR Build Validation on the downstream pipeline, so if pipeline A passed on repo 1's branch X but pipeline B failed, Azure wouldn't let me merge branch X?
I've had a couple of ideas for hacky workarounds if this isn't supported:
Write a script in pipeline A that kicks off pipeline B, sleeps and periodically polls the API to check whether pipeline B has passed/failed
Checkout repo 2's code during pipeline A, and run the solution tests there
Do either of them sound sensible?
Is it possible to make the pass/fail status of an Azure pipeline dependent on the pass/fail status of a downstream pipeline?
There is no out-of-the-box way to achieve this at the moment.
Personally, I think your ideas are sound. I have encountered similar requests before; they concerned release pipelines, but the concept of the implementation is the same.
The main idea is:
1. Add a PowerShell task to pipeline A that calls the REST API to queue pipeline B.
2. Cyclically monitor the build result of pipeline B.
3. After getting the result of pipeline B, call the logging command to set the build result of pipeline A accordingly.
You could check my previous thread for some more details:
Is it possible to run a "final stage" in Azure Pipelines without knowing how many stages there are in total?
And the logging command to set the build result:
##vso[task.complete result=Failed;]DONE
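As a minimal sketch of step 3, assuming $result holds pipeline B's result from the polling loop (the variable name and message are illustrative):

steps:
- powershell: |
    # $result would be populated by the polling loop described above
    $result = "failed"
    if ($result -ne "succeeded") {
      # Logging command: fails this run (pipeline A) to mirror pipeline B
      Write-Host "##vso[task.complete result=Failed;]Pipeline B failed"
    }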

Prevent Cancellation of Deployment Job on PR Changes

We have an Azure DevOps YAML multi-stage pipeline where code is built and then deployed to a sequence of environments. Deployment is achieved using Terraform.
i.e.
Build -> Test -> Deploy to DEV -> Deploy to TEST -> ...
This pipeline is used for both CI/CD builds and PR builds. For a PR build, the only part of the deployment stages that runs is terraform plan on the TF scripts.
For PRs, the pipeline is configured to cancel the ongoing build if changes are pushed to that PR.
The problem we're seeing is that when changes are pushed to the PR and the ongoing build is cancelled, sometimes that cancellation happens during the terraform plan step. This occasionally means that the blob lease taken by terraform plan is not released. From that point onwards, manual intervention is required (break the blob lease) in order for the deployment stages to run successfully.
I believe we can switch off the setting which causes the ongoing PR build to be cancelled if changes are pushed.
But I wondered if there was a way of marking a pipeline step as critical - i.e. it should run to completion even if the build is cancelled?
There are other ways of cancelling a pipeline build and there must be other tasks/steps which should not be cancelled part-way through. Such a critical-task setting would cover these situations too.
Not sure if you solved this, but I had the exact same issue. Adding condition: always() to my task forced it to complete when DevOps cancelled the pipeline after additional changes were pushed.
See https://learn.microsoft.com/en-us/azure/devops/pipelines/process/conditions?view=azure-devops&tabs=yaml:
jobs:
- job: Foo
  steps:
  - script: echo Hello!
    condition: always() # this step will always run, even if the pipeline is canceled
I'm afraid you won't be able to achieve this using just YAML. What you can do will require some additional effort (in some cases quite a lot):
replace the Terraform script with Bicep, for instance; for me the syntax is quite similar, and what you gain is that there is no Terraform state file to lock
add, as your very first step, a check whether your state is locked, and break the lease if needed (see the sketch after this list)
I understand that you would like to hear something better than this, but at the moment there is no way to mark a task as non-cancellable. However, this sounds like a cool feature and a candidate for a feature request.
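A sketch of such a first step using the Azure CLI task; the service connection, storage account, container, and blob names are all placeholders:

steps:
- task: AzureCLI@2
  displayName: Break stale Terraform state lease
  inputs:
    azureSubscription: my-service-connection  # placeholder
    scriptType: pscore
    scriptLocation: inlineScript
    inlineScript: |
      # If a cancelled run left the state blob leased, break the lease
      $blob = az storage blob show --account-name mystatestore `
        --container-name tfstate --name terraform.tfstate --auth-mode login | ConvertFrom-Json
      if ($blob.properties.lease.state -eq "leased") {
        az storage blob lease break --account-name mystatestore `
          --container-name tfstate --blob-name terraform.tfstate --auth-mode login
      }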

Azure Devops - is it possible to queue stages between pipeline runs

I have an Azure DevOps pipeline (YAML) with a stage that deploys an application to an environment and then runs a bunch of tests against it. This pipeline is triggered when a PR is created. We sometimes have multiple runs of the same pipeline happening at once, resulting in two deployments to the same environment at the same time.
Is it possible to configure the pipeline in such a way that the deploy stage can only be executed one at a time?
Simple example of what I'm trying to do:
Pipeline (yml) with stages: 1) Build -> 2) Deploy/Test -> 3) Release
Run 1: Build: complete -> Deploy/Test In progress -> Release waiting for stage 2
Run 2: Build: complete -> Deploy/Test waiting for Run 1 stage 2
If you use YAML, run your release in a deployment job that targets an environment with the Exclusive Lock check enabled. However, this approach has some drawbacks.
On the Developer Community you can find a feature request for better handling of this, and there is one workaround which involves calling the REST API. But, as someone already mentioned:
Workarounds involving polling end up with race conditions where two queued builds can both end up starting.
So there is no ideal solution, but if you can, please upvote the above-mentioned request.
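A minimal sketch of the deployment-job shape; the environment name is a placeholder, and the Exclusive Lock check itself is enabled on the environment under Approvals and checks:

stages:
- stage: DeployTest
  jobs:
  - deployment: DeployAndTest
    environment: test-env  # only one run at a time can hold this environment's lock
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo Deploying and testing...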

Specify order of pipelines and dependencies

I'm having a hard time getting a grasp on this to be honest.
Right now my lab project is as follows:
PR to master -> Triggers Pre-Build Pipeline as condition to merge the code ->
On merge Infrastructure pipe runs only if any changes happen in my Infrastructure folder ->
On merge I want to run my deploy pipeline to deploy my web app to Azure.
The pipes in question do the things they ought to, i.e.
Pre-build builds, publishes an artifact, runs unit tests, and validates ARM templates.
Infra pipe deploys the necessary infra for my web app, such as the resource group, app plan, app service, and key vault.
Deploy pipe downloads the artifact produced in pre-build and deploys it to a staging slot, then swaps it to the production slot.
What I can't seem to get to work is the pipeline chaining through dependencies, if changes happen to both infra and web app code in master I want the infra pipe to run first and the deploy pipe only if it succeeds.
If I merge only app code I want only the deploy pipe to run regardless if the infra pipe ran or not.
If I merge only infra code I want only the infra pipe to run.
If I merge both app and infra code I want both infra and deploy pipe to run in specific order.
I feel this shouldn't be all that hard to accomplish, but I've spent way too much time trying to solve this to no avail, anyone able to help? :)
Edit:
Hey, sorry @HughLin-MSFT, I've been trying to work around this a bit since we're trying to avoid running scripts left and right. :)
I saw you have Build Queuing planned in an upcoming release, so for now I think we might have to wait for that.
If I were to merge my deploy and infra pipe, can I use:
trigger:
  branches:
    include:
    - master
  paths:
    include:
    - Infrastructure/*
At stage level and somehow skip a stage instead?
Seen multiple articles mention "Continue if skipped" but can't find any information on how to actually skip a stage.
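For what it's worth, a stage can be skipped with a stage-level condition; a minimal sketch, where the deployInfra variable is purely illustrative:

stages:
- stage: Infra
  condition: eq(variables['deployInfra'], 'true')  # the stage is skipped when this is false
  jobs:
  - job: DeployInfra
    steps:
    - script: echo Deploying infrastructure...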
For the first and second cases, you just need to set Path filters in Triggers; the pipeline then only triggers when a file at the specified path is changed. Please refer to this.
For the third case, you can try adding two agent jobs to the infra pipe: add a Trigger Azure DevOps Pipeline task to the second agent job to trigger the deploy pipe, and set Only when all previous jobs have succeeded in the Run this job drop-down for job 2. In addition, add a PowerShell task before the Trigger Azure DevOps Pipeline task that detects whether there is app code; run job 2 if there is, and cancel it if not.
Update:
First, create a new pipeline and define a variable named changedcode.
Use the Builds - Get REST API to get the commit, then get the changed code folders with the Commits - Get Changes REST API.
Assign the changed folder names as the value of the changedcode variable.
Set custom conditions on the agent jobs: the Infra job runs only when the changedcode value contains Infra, and it uses the Builds - Queue REST API or the Trigger Azure DevOps Pipeline task to trigger the Infra pipeline. The same goes for the Deploy job; the only difference is the custom condition expression.
Here is a sample structure in yaml:
jobs:
- job: GetChanges
  steps:
  - powershell: |
      # Get the changed code folders with the REST API (see the sketch below)
      # and publish them as an output variable named changedcode
    name: detect
- job: Infra
  dependsOn: GetChanges
  condition: contains(dependencies.GetChanges.outputs['detect.changedcode'], 'Infra')
  steps:
  - powershell: |
      # Queue the Infra pipeline with the Builds - Queue REST API
      # or the Trigger Azure DevOps Pipeline task
- job: Deploy
  dependsOn:
  - GetChanges
  - Infra
  condition: and(in(dependencies.Infra.result, 'Succeeded', 'Skipped'), contains(dependencies.GetChanges.outputs['detect.changedcode'], 'deploy'))
  steps:
  - powershell: |
      # Queue the Deploy pipeline with the REST API
      # or the Trigger Azure DevOps Pipeline task
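For the detection job, a hedged sketch of those two REST calls; the folder parsing is deliberately simplified, and it assumes the build's access token can read builds and commits:

steps:
- powershell: |
    $headers = @{ Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN" }
    $base = "$(System.CollectionUri)$(System.TeamProject)/_apis"
    # Builds - Get: read the commit (source version) that triggered this run
    $build = Invoke-RestMethod -Uri "${base}/build/builds/$(Build.BuildId)?api-version=6.0" -Headers $headers
    # Commits - Get Changes: list the files changed by that commit
    $changes = Invoke-RestMethod -Uri "${base}/git/repositories/$(Build.Repository.ID)/commits/$($build.sourceVersion)/changes?api-version=6.0" -Headers $headers
    # Keep the unique top-level folders and publish them as an output variable
    $folders = $changes.changes | ForEach-Object { ($_.item.path -split '/')[1] } | Select-Object -Unique
    Write-Host "##vso[task.setvariable variable=changedcode;isOutput=true]$($folders -join ',')"
  name: detect
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)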

Is there a way to make an Azure DevOps release only publish the actual latest change from a build pipeline?

I have a situation where two commits were merged to master (e.g. FIRST and SECOND) very close together (seconds apart). Both triggered the build pipeline: FIRST triggered the pipeline first and SECOND triggered it second (the builds ran in parallel). For whatever reason, the build pipeline for commit SECOND finished first, and 30 seconds later the build for commit FIRST finished.
My automatic release pipeline is configured to always get the "latest" artifact from the build pipeline. The sequence of events described above caused the SECOND change to be deployed first, and then the FIRST change was deployed next (since its pipeline finished second) and stomped on the prior release, effectively deploying old bits to the service.
Is there any way to prevent this situation? Even if a build pipeline finishes second for intermittent reasons, I don't want a release to stomp over a more recent change that happened to finish earlier.
EDIT: Thank you to those who suggested/supported the idea of batching builds but that's not an option I'm looking to enable. I still want each commit to trigger its own build (to enable easier assignment of build break cause). I'm just looking for the releases to trigger in the order of commits, not the order of builds finishing.
Thanks!
You can set batch to true in triggers, so the system waits until the in-progress build is completed. Set the "Batch changes while a build is in progress" option to true in the Triggers tab of the build pipeline in Azure DevOps, or in YAML:
trigger:
  batch: true
If you use pull requests, there should be no issue, as a new push should cancel the in-progress run. Check autoCancel in PR triggers.
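In YAML the PR trigger setting looks like this (autoCancel defaults to true):

pr:
  branches:
    include:
    - master
  autoCancel: true  # cancel the in-progress run when the PR is updated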
You may need to make the pipelines run on the same agent, so that the newest queued run waits for the previous one to complete.
You can follow the steps below to confine your pipeline to one agent.
1. Add a custom capability to the agent you want to run the pipeline on (Project settings -> Agent pools -> select an agent pool -> Agents -> select an agent -> Capabilities).
2. Add a demand to your pipeline.
I tested this and found that the Microsoft-hosted agent pool does not support demands for custom capabilities in YAML pipelines.
The YAML below works only for a self-hosted agent pool.
pool:
  name: Default
  demands: Tag -equals Agent1