Azure Pipelines: Conditional run of task groups - azure-devops

In Azure Pipelines, I can conditionally run a specific task/job. However, if I have a task group, I don't see any option to specify conditions on its execution.
The only way I am able to do this is to create a new job altogether which runs the task group, and to specify the conditional expression on the job.
This has a drawback: jobs run on different agents, so a new agent is spun up only to run that task group under a condition. This slows down the deployment.
So the question is: how do I conditionally execute a task group?
Thanks

According to the docs: task conditions (such as "Run this task only when a previous task has failed" for a PowerShell Script task) can be configured in a task group, and these settings are persisted with the task group.
If you want a custom task-group-level condition, it is not supported by Azure DevOps for the time being.
I found a ticket on our UserVoice site, which is our main forum for product suggestions. You can vote and add your comments to this feedback. The product team will provide updates when they review it.
Update 1
As a workaround, we can define these tasks in a YAML template and then call different templates from a pipeline YAML depending on a condition.
Sample code:
parameters:
- name: experimentalTemplate
  displayName: 'Use experimental build process?'
  type: boolean
  default: false

steps:
- ${{ if eq(parameters.experimentalTemplate, true) }}:
  - template: experimental.yml
- ${{ if not(eq(parameters.experimentalTemplate, true)) }}:
  - template: stable.yml
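For completeness, each file referenced this way is an ordinary steps template. A minimal sketch of what stable.yml might contain (the echo step is purely illustrative):

```yaml
# stable.yml - a steps template included by the main pipeline.
# The script below is a placeholder for the real task group's steps.
steps:
- script: echo "Running the stable build process"
  displayName: 'Stable build'
```

Because the ${{ if }} conditions are evaluated at template expansion time, only the selected template's steps end up in the expanded pipeline, all within the same job and agent.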

Related

Defer Production release to run at customised time in azure devops yaml pipeline

We have classic pipelines set up with pre-deployment approvals to defer a production release to the time decided for the release, like below.
This is the kind of setup needed in a YAML pipeline.
Recently the company adopted Azure DevOps YAML, and all the pipelines are being migrated to it now.
I was asked to set up a similar structure in YAML pipelines, where people are able to approve and defer the release to a specific time.
How can I achieve a similar setup in YAML pipelines?
Unfortunately there is no option out-of-the-box. There are workarounds, maybe there is something that suits you:
1
On this community request several alternatives are described, such as:
Using classic pipelines with defer, kicking off the YAML pipeline with PowerShell.
2
Another alternative is described here: the use of Start-Sleep in the pipeline, which can be configured via parameters.
trigger:
- main

pool:
  vmImage: ubuntu-latest

parameters:
- name: delay
  displayName: Defer deployment (in seconds)
  type: string
  default: 0

steps:
# Delay further execution of a workflow by a fixed time.
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: 'Start-Sleep -s ${{ parameters.delay }}'
- script: 'echo after ${{ parameters.delay }} seconds'
3
For an agentless job, you can make use of the Delay task.
When using this task, an input parameter can be used to set the delay to a variable value.
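A minimal sketch of such an agentless job (the delayMinutes parameter name is illustrative):

```yaml
parameters:
- name: delayMinutes
  type: string
  default: '0'

jobs:
- job: Wait
  pool: server   # the Delay task runs only in agentless (server) jobs
  steps:
  - task: Delay@1
    inputs:
      delayForMinutes: ${{ parameters.delayMinutes }}
```

Because the job runs on the server pool, it does not occupy a build agent while waiting.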
Conclusion
All alternatives sound pretty hacky.

Can I use one AzureDevOps pipeline to run other pipelines?

I would like to have a master pipeline capable of running the pipelines of our system's individual components. I'd also like to be able to run any of those components' pipelines individually. Additionally, some of the component pipelines are configured using yaml, while others are using the classic approach. (I'm not sure if that figures into any possible solutions to this problem.) Those that are configured using yaml typically contain multiple jobs, and I'd need all of the jobs to run in those cases.
Using approach #2 recommended here, I tried the following:
jobs:
- job: build_and_deploy
  displayName: Build and Deploy
  cancelTimeoutInMinutes: 1
  pool:
    name: some-pool
  steps:
  - checkout: self
  - template: component_one_pipeline.yml
  - template: component_two_pipeline.yml
I receive an error for the following "unexpected values": trigger, resources, name, variables, and jobs. I'm guessing these aren't allowed in any yaml file referenced in the template step of another pipeline yaml file. As I mentioned above, though, I need these values in their files because we need to run the pipelines individually.
If possible, could someone point me in the direction of how to get this done?
EDIT: I have also tried the approach given here. I was thinking I'd have a master pipeline that essentially did nothing except serve as a trigger for all of the child pipelines that are supposed to run sequentially. Essentially, the child pipelines should subscribe to the master pipeline and run when it's done. I ended up with the following 2 files:
# master-pipeline.yml
trigger: none

pool:
  name: some agent pool

steps:
- script: echo Running MASTER PIPELINE
  displayName: 'Run master pipeline'

# child-pipeline.yml
trigger: none
#- testing-branch (tried these combinations trying to pick up master run)
#- main

pool:
  name: some agent pool

resources:
  pipelines:
  - pipeline: testing_master_pipeline
    source: TestingMasterPipeline
    trigger: true

steps:
- script: echo Running CHILD PIPELINE 1
  displayName: 'Run Child Pipeline 1'
Unfortunately, it's not working. I don't get any exceptions, but the child pipeline isn't running when I manually run the master pipeline. Any thoughts?
Thanks in advance.
The way those approaches you linked work, and the way Azure DevOps build triggering works in general, is that a build completion can trigger another build, and the trigger must live in the build that is to be triggered. So:
YAML templates can't have things like triggers, so they won't really help you here (though you can of course split any of the individual pipelines into templates). Triggers live in the main YAML pipeline file, which references the template files. So you can't have the individual component pipelines as templates.
YAML pipelines can be chained with the resources declaration mentioned in the first link. The way this works is that the resource declaration goes in the pipeline to be triggered, and you configure the conditions (like branch filters: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/pipeline-triggers?view=azure-devops#branch-filters) in that same pipeline. For example, in your component pipeline you declare the master pipeline as a resource and set the conditions under which the component pipeline will be triggered, such as when the master pipeline runs against a /release/* branch. Or just set the trigger to true in order to trigger the component pipeline from any successful run of the master pipeline. The component pipeline can still have its own pipeline triggers at the start of the pipeline declaration.
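As a sketch, the resources declaration in a component pipeline with such a branch filter might look like this (the pipeline names are illustrative):

```yaml
resources:
  pipelines:
  - pipeline: master_pipeline     # local alias for the resource
    source: MasterPipeline        # name of the triggering (master) pipeline
    trigger:
      branches:
        include:
        - release/*               # trigger only on master runs from release branches
```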
Classic build definitions can also be chained via edit build definition -> Triggers -> Build completion (see, for example, here: https://jpearson.blog/2019/03/27/chaining-builds-in-azure-devops/). This works the same way as with YAML pipelines; you configure the conditions for the classic pipeline to trigger, so add the master pipeline as a trigger to the component pipelines. Again, you can also set pipeline triggers for the component pipeline.
The limitation here is that a classic pipeline can be triggered by a YAML pipeline, but not vice versa. There is a similar limitation in the YAML resources declaration: those pipelines can't be triggered by a classic pipeline. If you need such triggering, or otherwise find the "native" triggers insufficient, you can of course make an Azure DevOps REST API call in either type of pipeline to trigger any pipeline. See: https://blog.geralexgr.com/cloud/trigger-azure-devops-build-pipelines-using-rest-api, or just search for the Azure DevOps REST API and the associated blog posts that call the API with PowerShell, the REST API task, or some other means.
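A sketch of such a REST call from a YAML pipeline step, using the Runs endpoint (the organization, project, and pipeline id segments are placeholders to fill in; this assumes the build identity has permission to queue the target pipeline):

```yaml
# Illustrative only: queue another pipeline via the Azure DevOps Runs REST API.
steps:
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      # Placeholders: replace {organization}, {project} and {pipelineId}.
      $url = "https://dev.azure.com/{organization}/{project}/_apis/pipelines/{pipelineId}/runs?api-version=6.0"
      $body = '{ "resources": { "repositories": { "self": { "refName": "refs/heads/main" } } } }'
      Invoke-RestMethod -Uri $url -Method Post -Body $body -ContentType 'application/json' `
        -Headers @{ Authorization = "Bearer $(System.AccessToken)" }
```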
As it turns out, I needed to set the pipelines' default branch to the one I was testing on for things to work correctly. The code in my original post works fine.
If you run into this problem and you're using a testing branch to test your pipelines, check out the information here on how to configure your pipeline to listen for triggers on your branch. Apparently a pipeline only listens for them on its default branch. Note: The example in the link uses the "classic" approach to pipeline configuration as an example, but you can reach the same page from a yaml configuration's edit screen by clicking the 3 dots on the right and selecting "Triggers."
Hope this helps someone.

Checking if a task is added more than once inside a job - Azure yaml templates

We use the "extends" templates feature of Azure yaml pipelines for security purposes. One requirement we have is to not allow the use of a specific task inside a job more than once and if it happens, validation just should fail. I know I can iterate through the job steps and look for the existence of a specific task but is there a way to check how many times a task appears inside a job?
I checked the length function, but it doesn't meet my needs, as I'm not able to filter the steps for a specific task.
An example of a check we do today:
${{ each step in parameters.steps }}:
  ${{ if contains(step.task, 'TaskA') }}:
    'TaskA is not allowed etc.': error
It depends on how you define the task in the YAML. It's recommended to check the definition and avoid duplicating the task directly.
Alternatively, you could put the task in the main YAML rather than in the template, and use the sample script here to validate the task in the template; if it exists, fail the pipeline.

Delay task in Azure Pipeline cannot be cancelled

I have a recurring issue with an Azure Pipeline YAML template that cannot be cancelled once started. The template defines a release stage that includes 3 jobs:
stage: Release
jobs:
- job: Wait
  steps:
  - task: Delay@1
    inputs:
      delayForMinutes: ${{ parameters.ReleaseDelay }}
- deployment: Deploy
  dependsOn:
  - Wait
  # several more tasks that work fine
- job: Cleanup # works fine also
Our workflow is such that sometimes, we want to go ahead and approve a deployment, but we would like to queue it to wait for a couple hours, e.g. to prep updates to release after business hours. It works fine normally.
The issue comes if we try to cancel the Wait task through the pipeline Web UI. Once the release environment approval has been granted and the wait task has started, the pipeline execution cannot be cancelled.
I've tested this with multiple pipelines that reuse this template and it is a persistent/reproducible issue.
So, my question is: is the Microsoft built-in Delay task inherently uninterruptible, or is the dependency in the successor task somehow locking the Delay from being cancelled?
The pipeline will show a status of "Cancelled" once I click the confirmation button to cancel the run, but the task continues to execute as if I had not done so. Crucially, it also does not cancel at the end of the Wait task. It starts straight into the deployment job as if it never received the order to cancel.
The Azure Pipelines docs do not mention the Delay task being uninterruptible, and I can cancel other tasks at different places in the pipeline which also have dependencies defined, so I don't think it's the fault of the dependency declaration, but that's also a secondary candidate for investigation.
You could investigate using the Manual Validation task instead of the Delay task.
With this, you can set a timeout but still have the ability to short-circuit it by resuming the pipeline. Set the task to "resume" once the timeout has been reached.
Your YAML would look something like this
stage: Release
jobs:
- job: waitForValidation
  displayName: Wait for external validation
  pool: server
  steps:
  - task: ManualValidation@0
    timeoutInMinutes: ${{ parameters.ReleaseDelay }}
    inputs:
      notifyUsers: |
        test@test.com
        example@example.com
      instructions: 'Please validate the build configuration and resume'
      onTimeout: 'resume'
- deployment: Deploy
  dependsOn:
  - waitForValidation
  # several more tasks that work fine
- job: Cleanup # works fine also
Note that this type of task can only run in an "agentless" job, so don't forget to set the pool on that job to "server", e.g. pool: server.

How to schedule stage deployments in Azure DevOps Pipelines?

With the classic Azure DevOps release pipeline our release flow was very easy to setup.
We had a build pipeline running many times during the day. On success it deployed to our development environment. Every night the latest successful deployment to dev was released to our test environment (running automated tests for hours), before it deployed to UAT. But often we also need to deploy to test during the day, if we have a new change which needs to go directly into test or UAT. The classic pipelines allowed us to skip a stage, or deploy if the previous was only partly successful.
1) Development - automatic
2) Test - nightly or manually
3) UAT - nightly or manually
4) Staging - manual approval
5) Production - manual approval
With the multi-stage pipelines the same flow seems to be very difficult to do. At least when it comes to making it as a single deployment pipeline. The first part is fine. We can have our build trigger the development deployment. But how can we hold back the release to the test environment until 0:30am, while still retain the ability to also release it manually? If I created a separate test environment pipeline, then it could work if it had no triggers, but a schedule.
Same with the UAT, as we also need the flexibility to manually run UAT deployments, then it would also need to go into its own pipeline. Releases to our staging and production environment we "gate" with manual approvals, which is fine.
While this technically could work, if we split the deployment pipeline into multiple pipelines it really gets difficult to manage "a release". Not to say that it kind of goes against the whole multi-stage pipeline principle if we create a separate pipeline per stage.
But with this being so easy to set up in the classic pipelines, I cannot really imagine that other companies have not run into the same limitation. Is it just me who cannot see the light, or can this really not be done with multi-stage pipelines?
manually run UAT deployments
We could add Azure DevOps multi-stage pipeline approval strategies in the YAML build.
Steps:
Open the Environments tab and click the New environment button -> click the Approvals and checks button. My environment name is TEST.
Then use it in the YAML pipeline (just a sample):
trigger: none

pool:
  vmImage: 'ubuntu-latest'

stages:
- stage: A
  jobs:
  - deployment: 'MyDeployment'
    displayName: MyDeployment
    environment: 'TEST'
  - job: A1
    steps:
    - script: echo "##vso[task.setvariable variable=skipsubsequent;isOutput=true]false"
      name: printvar
- stage: B
  condition: and(succeeded(), ne(stageDependencies.A.A1.outputs['printvar.skipsubsequent'], 'true'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo hello from Stage B
We could also configure a schedule trigger and use it in the multi-stage pipeline.
Note: the schedule trigger and approval strategies are applied at the stage level.
For scheduled jobs: you can use something like this in your YAML:
(Copied from Microsoft documentation)
schedules:
- cron: string # cron syntax defining a schedule
  displayName: string # friendly name given to a specific schedule
  branches:
    include: [ string ] # which branches the schedule applies to
    exclude: [ string ] # which branches to exclude from the schedule
  always: boolean # whether to always run the pipeline or only if there have been source code changes since the last successful scheduled run. The default is false.
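For instance, the nightly 0:30 deployment described in the question might be expressed like this (the branch name is illustrative; cron times are in UTC):

```yaml
schedules:
- cron: '30 0 * * *'            # every day at 00:30 UTC
  displayName: Nightly test/UAT deployment
  branches:
    include:
    - main
  always: true                  # run even if there are no new changes
```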
For manual jobs, you can use the Create release button to create and deploy a release manually. Do note that this can sometimes conflict with the schedule. Also, to "hold back" a release, put an approver on the release, and then, when approving, defer the release,
noting that the deferral time is in UTC and defaults to tomorrow - you can change it to any time after now.