Delay task in Azure Pipeline cannot be cancelled

I have a recurring issue with an Azure Pipeline YAML template that cannot be cancelled once started. The template defines a release stage that includes 3 jobs:
stage: Release
jobs:
- job: Wait
  steps:
  - task: Delay@1
    inputs:
      delayForMinutes: ${{ parameters.ReleaseDelay }}
- deployment: Deploy
  dependsOn:
  - Wait
  # several more tasks that work fine
- job: Cleanup # works fine also
Our workflow is such that sometimes we want to go ahead and approve a deployment, but queue it to wait for a couple of hours, e.g. to prep updates for release after business hours. Normally this works fine.
The issue comes if we try to cancel the Wait task through the pipeline Web UI. Once the release environment approval has been granted and the wait task has started, the pipeline execution cannot be cancelled.
I've tested this with multiple pipelines that reuse this template and it is a persistent/reproducible issue.
So, my question is: is the Microsoft built-in Delay task inherently uninterruptible, or is the dependsOn in the successor job somehow preventing the Delay from being cancelled?
The pipeline will show a status of "Cancelled" once I click the confirmation button to cancel the run, but the task continues to execute as if I had not done so. Crucially, it also does not cancel at the end of the Wait task. It will start straight into the deployment job as if it never received the order to cancel.
The Azure Pipelines docs do not mention the Delay task being uninterruptible, and I can cancel other tasks at different places in the pipeline which also have dependencies defined, so I don't think it's the fault of the dependency declaration, but that's a secondary candidate for investigation.

You could investigate using the Manual Validation task instead of the Delay task.
With it you can set a timeout but still have the ability to shortcut the timeout by resuming the pipeline. Set the task to "resume" once the timeout has been reached.
Your YAML would look something like this
stage: Release
jobs:
- job: waitForValidation
  displayName: Wait for external validation
  pool: server
  steps:
  - task: ManualValidation@0
    timeoutInMinutes: ${{ parameters.ReleaseDelay }}
    inputs:
      notifyUsers: |
        test@test.com
        example@example.com
      instructions: 'Please validate the build configuration and resume'
      onTimeout: 'resume'
- deployment: Deploy
  dependsOn:
  - waitForValidation
  # several more tasks that work fine
- job: Cleanup # works fine also
Note that this type of task can only run in an "agentless" job, so don't forget to set the pool on that job to server, e.g. pool: server.

Related

ManualValidation under same job ADO

Could you please help me understand how to write a manual validation step in the same job using an Azure DevOps YAML pipeline?
I have tried adding the manual validation step as a task in the same job and I am getting an error.
If you mean the Manual Validation task in a pipeline, please note that this task is only supported in YAML pipelines and can only be used in an agentless job of a YAML pipeline.
We need to specify pool: server for the job that contains that task.
So it's not supported to add the Manual Validation task to an agent job alongside other agent steps/tasks (e.g. a PowerShell task or any other executable task). However, we can combine the Manual Validation task with other tasks supported by agentless jobs in the same job. See Server jobs for details.
Official example for your reference:
jobs:
- job: waitForValidation
  displayName: Wait for external validation
  pool: server
  timeoutInMinutes: 4320 # job times out in 3 days
  steps:
  - task: ManualValidation@0
    timeoutInMinutes: 1440 # task times out in 1 day
    inputs:
      notifyUsers: |
        test@test.com
        example@example.com
      instructions: 'Please validate the build configuration and resume'
      onTimeout: 'resume'

AzDO ManualValidation step failing in YAML pipeline with no explanation of why

I'm converting to a full YAML AzDO pipeline and need to wait for manual validation for certain stages of my pipeline. I added the new ManualValidation task into a serverless job; however, it fails immediately with no details about why. I did add a Delay task in there as well (just as a sanity check to make sure my serverless job was actually running successfully), and it runs fine.
- job: waitForValidation
  displayName: Wait for external validation
  pool: Server
  timeoutInMinutes: 4320 # job times out in 3 days
  steps:
  - task: Delay@1
    inputs:
      delayForMinutes: '1'
  - task: ManualValidation@0
    timeoutInMinutes: 1440 # task times out in 1 day
    inputs:
      notifyUsers: |
        me@email.com
        you@email.com
      instructions: 'Please validate deployment can continue and resume'
      onTimeout: 'reject'
These are the docs I'm using:
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/manual-validation?view=azure-devops&tabs=yaml
I also dropped into the GitHub project just to make sure the task is still version 0 (it is).
Suggestions on why this might be failing and/or ways I can get some more details in the pipeline about WHY it failed?
Turns out we are actually using AzDO Server, not AzDO Services (thanks, Microsoft, for naming them so similarly), and this task is not yet available in the Server version :(
For anyone also frustrated by this lack of functionality on-prem, here's the documentation on using Deployment Jobs and on Environments.
We are able to get most of the functionality we were looking for this way, though it does require setting up environments.
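For reference, a deployment job targeting an environment (the place where approvals and checks can then be configured) looks roughly like the sketch below; the environment name, pool name and placeholder step are assumptions, not taken from the linked docs:
jobs:
- deployment: DeployToUat          # hypothetical deployment job
  displayName: Deploy to UAT
  environment: UAT                 # approvals/checks are configured on this environment
  pool:
    name: Default                  # assumed self-hosted pool for on-prem
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo "deployment steps go here"   # placeholder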

How to schedule stage deployments in Azure DevOps Pipelines?

With the classic Azure DevOps release pipelines our release flow was very easy to set up.
We had a build pipeline running many times during the day. On success it deployed to our development environment. Every night the latest successful deployment to dev was released to our test environment (running automated tests for hours), before it deployed to UAT. But often we also need to deploy to test during the day, if we have a new change which needs to go directly into test or UAT. The classic pipelines allowed us to skip a stage, or deploy even if the previous one was only partly successful.
1) Development - automatic
2) Test - nightly or manually
3) UAT - nightly or manually
4) Staging - manual approval
5) Production - manual approval
With the multi-stage pipelines the same flow seems to be very difficult to achieve, at least when it comes to keeping it as a single deployment pipeline. The first part is fine: we can have our build trigger the development deployment. But how can we hold back the release to the test environment until 0:30am, while still retaining the ability to release it manually? If I created a separate test-environment pipeline, it could work if it had no triggers, only a schedule.
Same with UAT: since we also need the flexibility to run UAT deployments manually, it would also need to go into its own pipeline. Releases to our staging and production environments we "gate" with manual approvals, which is fine.
While this technically could work, if we split the deployment pipeline into multiple pipelines it really gets difficult to manage "a release". Not to mention that it rather goes against the whole multi-stage pipeline principle if we create a separate pipeline per stage.
But with this being so easy to set up in the classic pipelines, I cannot really imagine that other companies have not run into the same limitation. Is it just me who cannot see the light, or can this really not be done with multi-stage pipelines?
manually run UAT deployments
We could add Azure DevOps multi-stage pipeline approval strategies in the YAML build.
Steps:
Open the Environments tab and click New environment -> click Approvals and checks -> my environment name is TEST.
Then use it in the YAML pipeline (just a sample):
trigger: none
pool:
  vmImage: 'ubuntu-latest'
stages:
- stage: A
  jobs:
  - deployment: 'MyDeployment'
    displayName: MyDeployment
    environment: 'TEST'
  - job: A1
    steps:
    - script: echo "##vso[task.setvariable variable=skipsubsequent;isOutput=true]false"
      name: printvar
- stage: B
  condition: and(succeeded(), ne(stageDependencies.A.A1.outputs['printvar.skipsubsequent'], 'true'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo hello from Stage B
We could also configure scheduled triggers and use them in multi-stage pipelines.
Note: the scheduled trigger and the approval strategies both apply at the stage level.
For scheduled jobs: you can use something like this in your YAML:
(Copied from Microsoft documentation)
schedules:
- cron: string # cron syntax defining a schedule
  displayName: string # friendly name given to a specific schedule
  branches:
    include: [ string ] # which branches the schedule applies to
    exclude: [ string ] # which branches to exclude from the schedule
  always: boolean # whether to always run the pipeline or only if there have been source code changes since the last successful scheduled run. The default is false.
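As a concrete illustration of that schema, a nightly 0:30 (UTC) schedule might look like the following; the branch name is an assumption:
schedules:
- cron: '30 0 * * *'            # every day at 00:30 UTC
  displayName: Nightly test/UAT deployment
  branches:
    include:
    - main                      # assumed branch name
  always: true                  # run even if there were no new changes since the last run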
For manual jobs, you can use the Create Release button to create and deploy a release manually. Do note that sometimes this can create a conflict with the schedule. Also, to "hold back a release", put an approver on the release and then, when approving, defer the release; note that the deferral time is in UTC and defaults to tomorrow - you can change it to any time after now.

Checks (approvals) for a deployment job are blocking the entire stage

I have the following YAML file for my pipeline:
trigger: none
stages:
# Other stages here...
- stage: Release
  jobs:
  - deployment: Staging
    environment: staging
    strategy:
      runOnce:
        deploy:
          steps:
          - download: none
          - task: DownloadBuildArtifacts@0
            # ...
          - task: AzureRmWebAppDeployment@4
            displayName: Deploy in staging
            # ...
  - deployment: Production
    environment: prod
    dependsOn: Staging
    strategy:
      runOnce:
        deploy:
          steps:
          - download: none
          - task: AzureAppServiceManage@0
            displayName: Swap stg-prod slots
            # ...
Based on this, to give more context, my thinking is to have two stages: the first builds my application, the second releases to staging (QA) and then to production.
The environment "prod", though, has a check (or approval, whatever you want to call it).
I'm not sure if I'm encountering a bug or not, but what is happening is that when stage 1 completes (the build phase), the release phase of stage 2 is blocked and waiting for approval even though "staging" does not have any check enabled (only prod does).
The easiest workaround is to create different stages, one for staging and one for production, but that does not match my expected behaviour. I expect the deployment for the Staging job to complete successfully, and only then the Production job to wait for approval.
Do you have any suggestion regarding this? Is this a bug?
Checks (approvals) for a deployment job are blocking the entire stage
Sorry for any inconvenience.
Personally, I believe this behavior is by design at the moment.
As the document states:
Approvals in multi-stage YAML pipelines
We continue to improve multi-stage YAML pipelines; we now let you add manual approvals to these pipelines. Infrastructure owners can protect their environments and seek manual approvals before a stage in any pipeline deploys to them.
This feature is designed around the stage, not the environment, so it blocks the whole stage.
In my test I could reproduce the issue just as you describe. But your request is reasonable; personally, I think this feature should be designed around the environment.
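If you do go with the separate-stages workaround, a minimal sketch (reusing the environment names from your snippet, with the deployment steps elided) could look like this; the approval check on prod then only pauses the pipeline before the Production stage starts:
stages:
- stage: Staging
  jobs:
  - deployment: Staging
    environment: staging        # no check configured, deploys immediately
    strategy:
      runOnce:
        deploy:
          steps:
          - download: none
          # deployment tasks...
- stage: Production
  dependsOn: Staging
  jobs:
  - deployment: Production
    environment: prod           # the approval check pauses the run here
    strategy:
      runOnce:
        deploy:
          steps:
          - download: none
          # slot swap tasks...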
You could add your request for this feature on our UserVoice site (https://developercommunity.visualstudio.com/content/idea/post.html?space=21), which is our main forum for product suggestions. Thank you for helping us build a better Azure DevOps.
Hope this helps.

Is it possible to have an Azure hosted build agent persist between pipeline stages

I have a pipeline with 2 stages - a build/test stage, and a Teardown stage that cleans up external resources after the build/test stage. The teardown stage depends on some state information that gets generated in the build/test stage. I'm trying to use Azure hosted agents to do this. The problem is that the way I have it now, each stage deploys a new agent, so I lose the state I need for the teardown stage.
My pipeline looks something like this:
trigger:
- master
stages:
- stage: Build_stage
  jobs:
  - job: Build_job
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - task: InstallSomeTool@
    - script: invoke someTool
    - script: run some test
- stage: Teardown_stage
  condition: always()
  jobs:
  - job: Teardown_job
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - script: invoke SomeTool --cleanup
The teardown stage fails because it's a brand new agent that knows nothing about the state created by the previous invoke someTool script.
I'm trying to do it this way because the Build stage creates some resources externally that I want to be cleaned up every time, even if the Build stage fails.
Is it possible to have an Azure hosted build agent persist between pipeline stages?
No, you can't. The hosted agents are all randomly assigned by the server; you cannot use any script or command to request a specific one.
Since you said that the Build_stage creates some resources externally that you want cleaned up, you can execute the cleanup command as the last step in Build_stage. If you do that, whether you use a hosted or a private agent does not matter for what you want.
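One detail worth adding: a step-level condition such as always() keeps that cleanup step running even if an earlier step in the same job fails, which preserves the "clean up every time" behaviour you wanted from the Teardown_stage. A sketch reusing the placeholder commands from the question:
- stage: Build_stage
  jobs:
  - job: Build_job
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - script: invoke someTool
    - script: run some test
    # runs even if the steps above fail, so the external resources are always cleaned up
    - script: invoke SomeTool --cleanup
      condition: always()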