I am working with 9 repositories, each with its own separate Azure pipeline. However, during the deployment stage, some jobs/steps depend on the status of a job from a separate deployment pipeline. If the resource (e.g. a Resource Group) has not yet been deployed, the job/step that depends on it will naturally fail. I cannot simply use dependsOn because the job comes from another pipeline.
Questions:
Is there a way for me to check, monitor, or wait (for a specified period of time) for a particular resource to be deployed?
Are these pipelines in need of restructuring?
I've seen the documentation for multi-repo pipelines. However, each pipeline must be deployable on its own, without waiting for another pipeline to finish.
If you use a YAML-based pipeline, you may consider pipeline triggers. They allow you to trigger one pipeline after another. You may also apply a stages filter:
In this sprint, we added support for 'stages' as a filter for pipeline resources in YAML. With this filter, you don't need to wait for the entire CI pipeline to be completed to trigger your CD pipeline. You can now choose to trigger your CD pipeline upon completion of a specific stage in your CI pipeline.
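As an illustration, the downstream pipeline could declare a pipeline resource with a stages filter roughly like this; the pipeline and stage names below are placeholders:

resources:
  pipelines:
    - pipeline: upstream                  # alias used inside this pipeline
      source: MyProject-Infrastructure    # placeholder name of the triggering pipeline
      trigger:
        stages:
          - DeployResourceGroup           # run once this upstream stage completes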
Related
I'm currently migrating code from GitLab to Azure DevOps and Azure Pipelines. In GitLab I have a two-repo setup like this:
Pipeline A in repo 1 runs build & UTs. It triggers:
Pipeline B in repo 2 runs solution tests. If it passes:
Pipeline A publishes artifacts
You can do this in GitLab using the depend strategy in your trigger job. This means that if Pipeline B fails, Pipeline A also fails, and so the publishing stage in Pipeline A is skipped.
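For reference, the GitLab side of that looks roughly like this (job name, stage, and project path are placeholders):

run-solution-tests:
  stage: test
  trigger:
    project: my-group/repo-2   # placeholder path of the repo 2 project running pipeline B
    strategy: depend           # pipeline A waits for pipeline B and mirrors its result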
Am I right in thinking that this setup is not natively supported in Azure Pipelines, e.g. with pipeline completion triggers? I don't think you can have one pipeline pausing halfway through and waiting for another, or have the upstream pipeline's pass/fail status mirror that of the downstream pipeline.
If so, what do you think would be a good solution here? Is it possible to gate PR Build Validation on the downstream pipeline, so if pipeline A passed on repo 1's branch X but pipeline B failed, Azure wouldn't let me merge branch X?
I've had a couple of ideas for hacky workarounds if this isn't supported:
Write a script in pipeline A that kicks off pipeline B, sleeps, and periodically polls the API to check whether pipeline B has passed/failed
Check out repo 2's code during pipeline A, and run the solution tests there
Do either of them sound sensible?
Is it possible to make the pass/fail status of an Azure pipeline dependent on the pass/fail status of a downstream pipeline?
There is no such out-of-the-box way to achieve this at the moment.
Personally, I think your ideas are sound. I have encountered similar requests before; they were about release pipelines, but the implementation concept is the same.
The main idea is:
1. Add a PowerShell task to pipeline A that calls the REST API to queue pipeline B.
2. Cyclically monitor the build result of pipeline B.
3. Once the result of pipeline B is known, call the logging command to set the build result of pipeline A accordingly.
You could check my previous thread for some more details:
Is it possible to run a "final stage" in Azure Pipelines without knowing how many stages there are in total?
And the logging command that sets the build result:
##vso[task.complete result=Failed;]DONE
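A minimal sketch of such a task, assuming pipeline B has build definition ID 42 (a placeholder) and that the job's access token is allowed to queue builds:

- powershell: |
    $headers = @{ Authorization = "Bearer $(System.AccessToken)" }
    $baseUrl = "$(System.CollectionUri)$(System.TeamProject)/_apis/build/builds"

    # Queue pipeline B (definition ID 42 is a placeholder)
    $run = Invoke-RestMethod -Uri "${baseUrl}?api-version=6.0" -Method Post `
      -Headers $headers -ContentType "application/json" `
      -Body '{ "definition": { "id": 42 } }'

    # Poll until pipeline B completes
    do {
      Start-Sleep -Seconds 30
      $run = Invoke-RestMethod -Uri "${baseUrl}/$($run.id)?api-version=6.0" -Headers $headers
    } while ($run.status -ne "completed")

    # Mirror pipeline B's result in pipeline A
    if ($run.result -ne "succeeded") {
      Write-Host "##vso[task.complete result=Failed;]Pipeline B did not succeed"
    }
  displayName: Queue pipeline B and wait for its result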
I have an Azure DevOps pipeline and I want it to run nightly with two different agent pools, one dev and one prod.
This is the pipeline with the default dev agent pool:
In the schedule settings there is no option to set a different agent pool for the runs:
I saw this answer (a solution with YAML settings), but I didn't find a way to use it in my pipeline (my pipeline is defined in the Azure DevOps UI).
As you use classic GUI pipelines, you could define two different jobs that run on different agent pools. This way you could have a single pipeline that you run depending on your schedule.
When using YAML syntax, you could define different stages to accomplish the same result.
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/stages?view=azure-devops&tabs=yaml
Create a new Stage. The first stage's job will use one pool and the second stage will use a different pool. They can then be scheduled or triggered independently. You can also clone the first stage to save you the time of duplicating the tasks.
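In YAML, a rough sketch of the two-stage idea could look like this; the pool names and the schedule are placeholders:

schedules:
  - cron: "0 2 * * *"           # placeholder nightly time
    displayName: Nightly run
    branches:
      include:
        - main
    always: true

stages:
  - stage: Dev
    jobs:
      - job: RunOnDevPool
        pool: DevPool            # placeholder dev agent pool
        steps:
          - script: echo "Running on the dev pool"

  - stage: Prod
    dependsOn: []                # run independently of the Dev stage
    jobs:
      - job: RunOnProdPool
        pool: ProdPool           # placeholder prod agent pool
        steps:
          - script: echo "Running on the prod pool"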
I have two repos on my Azure DevOps project. One for the Cloud Infrastructure deployment and another that contains my application code.
I have a YAML pipeline that is triggered after either of those repos' build pipelines finishes. The pipeline looks a bit like this:
resources:
  pipelines:
    - pipeline: MyProject-Code
    - pipeline: MyProject-Infrastructure
jobs:
  - job: DeployInfrastructure
    steps:
      # Here are the tasks that deploy the project infrastructure
  - job: DeployCode
    steps:
      # Here are the tasks that deploy the code
I would like to put a condition on the DeployInfrastructure job so that it is only executed if the triggering pipeline is the infrastructure one, as I do not need to redeploy it when the change only affects the application code.
However, when reading the documentation from Microsoft there does not seem to be a very straightforward way of doing this.
Have a look at Pipeline resource variables
In each run, the metadata for a pipeline resource is available to all jobs in the form of predefined variables. The <alias> is the identifier that you gave for your pipeline resource. Pipeline resources variables are only available at runtime.
There is also a set of predefined variables called Build.TriggeredBy.*, among them Build.TriggeredBy.DefinitionName. However, the documentation suggests that for a YAML pipeline with pipeline triggers, the resource variables should be used instead:
If the build was triggered by another build, then this variable is set
to the name of the triggering build pipeline. In Classic pipelines,
this variable is triggered by a build completion trigger.
This variable is agent-scoped, and can be used as an environment
variable in a script and as a parameter in a build task, but not as
part of the build number or as a version control tag.
If you are triggering a YAML pipeline using resources, you should use
the resources variables instead.
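Putting that together, one hedged sketch of the condition, assuming the Resources.TriggeringAlias predefined variable is available in your Azure DevOps version and that the alias matches the one declared under resources:

jobs:
  - job: DeployInfrastructure
    # Run only when this run was triggered by the infrastructure pipeline resource
    condition: eq(variables['Resources.TriggeringAlias'], 'MyProject-Infrastructure')
    steps:
      - script: echo "Deploying the project infrastructure"

  - job: DeployCode
    steps:
      - script: echo "Deploying the code"

If that variable is not available, Build.TriggeredBy.DefinitionName can be compared against the triggering pipeline's name in the same kind of condition, with the caveats quoted above.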
I have a problem with multi-stage pipelines. Let's say I have pipeline A and pipeline B.
Pipeline A is as follows :
Stage A.1
Stage A.2
Pipeline B is as follows :
Stage B.1
Stage B.2
Those pipelines work on different triggers placed on different repositories.
Sometimes we have the following behavior :
Pipeline A starts stage A.1
Then, before pipeline A can begin stage A.2, pipeline B is launched because of its trigger and starts stage B.1 (please note that pipelines A and B are totally independent of one another)
Only after B.1 has finished can pipeline A continue with A.2
I'm not complaining about the sequential behavior; I don't want parallel runs. But I would like to tell Azure DevOps to finish one pipeline before it starts another.
To summarize, can you tell Azure DevOps to finish a multi-stage pipeline before it starts another pipeline? And I'm not talking about another instance of the same pipeline; I am talking about a completely different pipeline.
This seems like a use case for exclusive locks. You could create an environment for each pipeline (environment A and environment B) in Azure Pipelines and then apply an exclusive lock policy to these two environments. You apply the same lock to both environments, not two separate locks.
For more information:
https://learn.microsoft.com/en-us/azure/devops/release-notes/2020/pipelines/sprint-172-update#exclusive-deployment-lock-policy
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/approvals?view=azure-devops&tabs=check-pass#exclusive-lock
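Each pipeline would then target its environment through a deployment job, roughly like this (the environment name is a placeholder and is assumed to carry the Exclusive Lock check):

jobs:
  - deployment: Deploy
    environment: pipeline-a-env    # placeholder environment configured with the Exclusive Lock check
    strategy:
      runOnce:
        deploy:
          steps:
            - script: echo "Deploying while holding the exclusive lock"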
I have a question about Azure DevOps release pipelines. My pipeline workflow is multi-stage: the build triggers the QA stage, which then triggers the UAT stage, which then triggers the PROD stage.
I use pipeline variables to manage each stage and require pre-approval on the UAT and PROD stages so that a change does not instantly get deployed to every stage sequentially.
My question is how to handle the case where I have multiple servers in an environment. I see that each environment should be treated as a stage, but right now I am treating each server in an environment as a stage whose tasks run in parallel. This works for the first stage (QA), but becomes ugly for UAT, since each server then requires pre-approval instead of the environment.
I also have pipeline variables that specify paths for files to be dropped on servers. With one server per stage this works, but not with multiple servers in a stage.
My pipeline currently looks like the picture below with UAT1 and UAT2 each requiring approval. How do I handle multiple servers for the QA and UAT stages, and later PROD?
My issue was that I was using a stage to represent a single server in an environment (i.e. QA, UAT, or PROD) instead of bundling the tasks performed per server into a task group and then using multiple task groups within the stage.
My pipeline now looks like the image below.
Within the stage, there is a task group per server.
The common tasks per server are contained in the task group.