Could you please help me understand how to write a Manual Validation step in the same job using an Azure DevOps YAML pipeline?
I have tried adding the Manual Validation step under the same job as a task, and I'm getting an error.
If you mean the Manual Validation task in a pipeline, then please note that this task is only supported in YAML pipelines and can only be used in an agentless job of a YAML pipeline.
We need to specify pool: server for that task in an agentless job.
So it's not supported if you want to add the Manual Validation task to an agent job with other agent steps/tasks (e.g. a PowerShell task or any other executable task). However, we can add the Manual Validation task together with other tasks supported in agentless jobs in the same job. See Server jobs for details.
Official example for your reference:
jobs:
- job: waitForValidation
  displayName: Wait for external validation
  pool: server
  timeoutInMinutes: 4320 # job times out in 3 days
  steps:
  - task: ManualValidation@0
    timeoutInMinutes: 1440 # task times out in 1 day
    inputs:
      notifyUsers: |
        test@test.com
        example@example.com
      instructions: 'Please validate the build configuration and resume'
      onTimeout: 'resume'
We have classic pipelines set up with pre-deployment approvals to defer a production release to the time decided for the release, like below.
This is the kind of setup needed in a YAML pipeline.
Recently our company adopted Azure DevOps YAML, and all the pipelines are being migrated to YAML now.
I was asked to set up a similar structure in YAML pipelines, where people are able to approve and defer the release to a specific time.
How can I achieve a similar setup in YAML pipelines?
Unfortunately, there is no out-of-the-box option. There are workarounds; maybe one of them suits you:
1
On this community request, several alternatives are described, such as:
Using classic pipelines with defer, and kicking off the YAML pipeline with PowerShell.
2
Another alternative is described here: the use of Sleep in the pipeline, which can be configured via parameters.
trigger:
- main

pool:
  vmImage: ubuntu-latest

parameters:
- name: delay
  displayName: Defer deployment (in seconds)
  type: string
  default: 0

steps:
# Delay further execution of a workflow by a fixed time.
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: 'Start-Sleep -s ${{ parameters.delay }}'
- script: 'echo after ${{ parameters.delay }} seconds'
3
For an agentless job, you can make use of the Delay task.
With this task, an input parameter can be used to set the delay to a variable value.
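For example, a minimal sketch of the Delay task in an agentless job, with the delay driven by a runtime parameter (the parameter name delayMinutes and job name are illustrative):

```yaml
parameters:
- name: delayMinutes
  displayName: Defer deployment (in minutes)
  type: string
  default: '0'

jobs:
- job: deferRelease
  pool: server # Delay@1 only runs in agentless (server) jobs
  steps:
  - task: Delay@1
    inputs:
      delayForMinutes: ${{ parameters.delayMinutes }}
```

When queuing the run, the person approving can set the parameter to the number of minutes to defer.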
Conclusion
All alternatives sound pretty hacky.
I have a recurring issue with an Azure Pipeline YAML template that cannot be cancelled once started. The template defines a release stage that includes 3 jobs:
stage: Release
jobs:
- job: Wait
  steps:
  - task: Delay@1
    inputs:
      delayForMinutes: ${{ parameters.ReleaseDelay }}
- deployment: Deploy
  dependsOn:
  - Wait
  # several more tasks that work fine
- job: Cleanup # works fine also
Our workflow is such that sometimes we want to go ahead and approve a deployment, but we would like to queue it to wait for a couple of hours, e.g. to prep updates to release after business hours. It works fine normally.
The issue comes if we try to cancel the Wait task through the pipeline Web UI. Once the release environment approval has been granted and the wait task has started, the pipeline execution cannot be cancelled.
I've tested this with multiple pipelines that reuse this template and it is a persistent/reproducible issue.
So, my question is: is the Microsoft built-in Delay task inherently un-interruptible, or is the dependency in the successor task somehow locking the Delay task so it cannot be cancelled?
The pipeline will show a status of "Cancelled" once I click the confirmation button to cancel the run, but the task continues to execute as if I had not done so. Crucially, it also does not cancel at the end of the Wait task. It will start straight into the deployment job as if it never received the order to cancel.
The Azure Pipelines docs do not mention the Delay task being un-interruptible, and I can cancel other tasks at different places in the pipeline which also have dependencies defined, so I don't think it's the fault of the dependency declaration, but that's also a secondary candidate for investigation.
You could investigate using the Manual Validation task instead of the Delay task.
Using this, you could set a timeout but have the ability to shortcut it by resuming the pipeline. Set the task to "resume" once the timeout has been reached.
Your YAML would look something like this
stage: Release
jobs:
- job: waitForValidation
  displayName: Wait for external validation
  pool: server
  steps:
  - task: ManualValidation@0
    timeoutInMinutes: ${{ parameters.ReleaseDelay }}
    inputs:
      notifyUsers: |
        test@test.com
        example@example.com
      instructions: 'Please validate the build configuration and resume'
      onTimeout: 'resume'
- deployment: Deploy
  dependsOn:
  - waitForValidation
  # several more tasks that work fine
- job: Cleanup # works fine also
Note that this type of task can only be run in an "agentless" job, so don't forget to set the pool on that job to server, e.g. pool: server.
I'm converting to a full YAML AzDO pipeline and need to wait for manual validation for certain stages of my pipeline. I added the new ManualValidation task into a serverless job; however, it fails immediately with no details about why. I did add a Delay task in there as well (just as a sanity check to make sure my serverless job was actually running successfully), and it runs fine.
- job: waitForValidation
  displayName: Wait for external validation
  pool: server
  timeoutInMinutes: 4320 # job times out in 3 days
  steps:
  - task: Delay@1
    inputs:
      delayForMinutes: '1'
  - task: ManualValidation@0
    timeoutInMinutes: 1440 # task times out in 1 day
    inputs:
      notifyUsers: |
        me@email.com
        you@email.com
      instructions: 'Please validate deployment can continue and resume'
      onTimeout: 'reject'
These are the docs I'm using:
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/manual-validation?view=azure-devops&tabs=yaml
I also dropped into the GitHub project just to make sure the task is still version 0 (it is).
Suggestions on why this might be failing and/or ways I can get some more details in the pipeline about WHY it failed?
Turns out we are actually using Azure DevOps Server, not Azure DevOps Services (thanks, Microsoft, for naming them so similarly), and this task is not yet available in the Server version :(
For anyone also frustrated by this lack of functionality on-prem, here's the documentation on using Deployment Jobs and some about Environments.
We were able to get most of the functionality we were looking for this way, though it does require setting up environments.
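A rough sketch of that environment-based approach (the environment name production and job name are illustrative; the approval itself is configured on the environment in the Azure DevOps UI under Approvals and checks, not in YAML):

```yaml
jobs:
- deployment: DeployWeb
  displayName: Deploy to production
  pool:
    vmImage: 'ubuntu-latest'
  # the run pauses here until the environment's approval check passes
  environment: production
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo deploying...
```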
I have a pipeline I created in Azure DevOps that builds an Angular application and runs some tests on it. I separated the pipeline into two jobs, Build and Test. The Build job completes successfully. The Test job checks out the code from Git again even though the Build job already did it. The Test job needs the files created in the Build job, such as the npm packages, in order to run successfully.
Here is my YAML file:
trigger:
- develop

variables:
  npm_config_cache: $(Pipeline.Workspace)/.npm
  system.debug: false

stages:
- stage: Client
  pool:
    name: Windows
  jobs:
  - job: Build
    displayName: Build Angular
    steps:
    - template: templates/angularprodbuild.yml
  - job: Test
    displayName: Run Unit and Cypress Tests
    dependsOn: Build
    steps:
    - template: templates/angularlinttest.yml
    - template: templates/angularunittest.yml
    - template: templates/cypresstest.yml
My agent pool is declared at the stage level so both jobs would be using the same agent. Also I added a dependsOn to the Test job to ensure the same agent would be used. After checking logs, the same agent is in fact used.
How can I get the Test job to use the files that were created in the Build job and not checkout the code again? I'm using Angular 11 and Azure DevOps Server 2020 if that helps.
If you are using a self-hosted agent, by default, none of the workspace is cleaned in between two consecutive jobs. As a result, you can do incremental builds and deployments, provided that tasks are implemented to make use of that.
So we could use - checkout: none in the Test job to skip checking out the same code that was already checked out in the Build job:
- job: Test
  displayName: Run Unit and Cypress Tests
  dependsOn: Build
  steps:
  - checkout: none
  - template: templates/angularlinttest.yml
But just as Bo Søborg Petersen said, dependsOn does not ensure that the same agent is used. You need to add a User Capability to that specific build agent, and then in the build definition you put that capability as a demand:
pool:
  name: string
  demands: string | [ string ]
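For example, assuming you added a user capability such as MyCapability = Yes to the agent (the capability name and agent name below are illustrative):

```yaml
pool:
  name: Windows
  demands:
  - MyCapability -equals Yes
  # or pin the exact agent by its name:
  # - Agent.Name -equals MyBuildAgent01
```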
Please check the document How to send TFS build to a specific agent or server for some more info.
In the Test job, we could then use predefined variables like $(System.DefaultWorkingDirectory) to access the files for Node and npm.
On the other hand, if you are using a hosted agent, we need to use the PublishBuildArtifacts task to publish the artifact, so that we can use the DownloadBuildArtifacts task to download it in the next job:
jobs:
- job: Build
  pool:
    vmImage: 'ubuntu-16.04'
  steps:
  - script: npm test
  - task: PublishBuildArtifacts@1
    inputs:
      pathtoPublish: '$(System.DefaultWorkingDirectory)'
      artifactName: WebSite

# download the artifact and deploy it only if the build job succeeded
- job: Deploy
  pool:
    vmImage: 'ubuntu-16.04'
  steps:
  - checkout: none # skip checking out the default repository resource
  - task: DownloadBuildArtifacts@0
    displayName: 'Download Build Artifacts'
    inputs:
      artifactName: WebSite
      downloadPath: $(System.DefaultWorkingDirectory)
You could check Official documents and examples for some more details.
Assume that the agent is cleaned between jobs; so to access the files, you need to create an artifact during the Build job and then download it during the Test job.
Also, dependsOn does not ensure that the same agent is used, only that the second job runs after the first job.
You can also set the second job to not check out the code with "- checkout: none".
I have a pipeline with 2 stages - a build/test stage, and a Teardown stage that cleans up external resources after the build/test stage. The teardown stage depends on some state information that gets generated in the build/test stage. I'm trying to use Azure hosted agents to do this. The problem is that the way I have it now, each stage deploys a new agent, so I lose the state I need for the teardown stage.
My pipeline looks something like this:
trigger:
- master

stages:
- stage: Build_stage
  jobs:
  - job: Build_job
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - task: InstallSomeTool@0
    - script: invoke someTool
    - script: run some test
- stage: Teardown_stage
  condition: always()
  jobs:
  - job: Teardown_job
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - script: invoke SomeTool --cleanup
The teardown stage fails because it's a brand new agent that knows nothing about the state created by the previous invoke someTool script.
I'm trying to do it this way because the Build stage creates some resources externally that I want to be cleaned up every time, even if the Build stage fails.
Is it possible to have an Azure hosted build agent persist between pipeline stages?
No, you can't. Hosted agents are all randomly assigned by the server. You cannot use any script or command to request a specific one.
Since you said that the Build_stage creates some resources externally that you want to clean up, you can instead run the cleanup command as the last step of Build_stage. Done this way, it does not matter whether you use a hosted or a private agent.
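A sketch of that suggestion, reusing the question's placeholder commands: make the cleanup the last step of Build_job and give it condition: always(), so it runs on the same agent even when an earlier step fails:

```yaml
stages:
- stage: Build_stage
  jobs:
  - job: Build_job
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - script: invoke someTool
    - script: run some test
    - script: invoke SomeTool --cleanup
      condition: always() # runs even if the previous steps failed
```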