Identify build policy responsible for run of pull request build in Azure DevOps pipeline - azure-devops

I would like to identify the build policy for a build that was run by clicking the Queue (or Re-queue) button against a required/optional check from within a pull request. I wish to identify the policy programmatically from within a pipeline, e.g. a script task. I'm open to any approach; I've been exploring the az CLI but no luck thus far.
I've set up two build policies against a branch that both target the same build definition - Policy A and Policy B. Both are set up to be run manually - A is required, B is optional. Both will surface in the UI for a pull request as checks - A being required, B being optional. When a build is run by clicking the Queue (or Re-queue) button against either check, I would like to be able to identify which of the two policies the run was initiated from (which policy provided the Queue or Re-queue button that was clicked).
EDIT: A bit more background on what I'm doing ...
I've got a single pipeline for building an application.
I've recently got a request to update the pipeline to support publishing to Chromatic.
I've added a Publish to Chromatic parameter to the pipeline and a task to push to Chromatic when the parameter is set to true.
I received a subsequent request to make it easier to publish changes from a feature branch to Chromatic. One engineer threw out the idea of having an optional check available in pull requests to give a single button click experience.
While researching my options, I was wondering if it would be possible to enhance the existing pipeline to set the Publish to Chromatic parameter to true during a run. I found this comment on Reddit which ultimately led to me posting here ...
"set a default for your parameter (I like to use 'auto'). Add a script task near the beginning that reads the pull request comment and sets a variable for you to use in later logic if the parameter is 'auto'. You can even condition this to only run on a PR."
I am aware that I could create a separate pipeline for publishing to Chromatic instead of updating the existing one; that's one of a few options I have. At this point, I'm more so curious whether or not this particular approach is technically feasible, even if I opt not to go forward with it.
Hope that adds some clarity!

The policy that queued the pipeline isn't visible to the pipeline as a pipeline variable. In fact, there doesn't seem to be any indication of whether the pull request build was queued manually or automatically.
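The closest programmatic view is the Policy Evaluations REST API, which lists the policy configurations evaluated for a PR, though it still won't tell you which check's Queue button was clicked. A sketch of building that request URL, in illustrative Python (the function name is made up; the `vstfs:///CodeReview/CodeReviewId/{projectId}/{pullRequestId}` artifact-ID format is the documented one):

```python
from urllib.parse import quote

def policy_evaluations_url(collection_uri: str, project_id: str, pr_id: int,
                           api_version: str = "6.0-preview.1") -> str:
    """Build the Policy Evaluations - List request URL for a pull request.

    collection_uri is e.g. the value of System.TeamFoundationCollectionUri
    (note it normally ends with a slash).
    """
    artifact_id = f"vstfs:///CodeReview/CodeReviewId/{project_id}/{pr_id}"
    return (f"{collection_uri}{project_id}/_apis/policy/evaluations"
            f"?artifactId={quote(artifact_id, safe='')}&api-version={api_version}")

# Hypothetical org/project values, for illustration only:
print(policy_evaluations_url("https://dev.azure.com/fabrikam/", "my-project-guid", 42))
```

Each evaluation record carries the policy configuration (type, display name, isBlocking), which is enough to distinguish Policy A from Policy B, but not which one launched a given run.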
There might be a few other ways to approach this...
I would start by putting a publishChromatic parameter in the pipeline and then building up conditions in the pipeline execution around this variable. By default, let's assume that the value is false so that if you're manually queueing a pipeline run you can opt-in.
trigger:
  branches:
    include:
    - develop

parameters:
- name: publishChromatic
  displayName: 'Publish build to Chromatic'
  type: boolean
  default: false

jobs:
- job: Build
  variables:
    publishChromatic: ${{ parameters.publishChromatic }}
  steps:
  # ... pre-execution steps
  - task: CmdLine@2
    displayName: Publish to Chromatic
    condition: and(succeeded(), eq(variables['publishChromatic'], 'true'))
    inputs:
      script: npx chromatic --project-token=$(CHROMATIC_PROJECT_TOKEN) --branch=$(Build.SourceBranch)
  # ... post-execution steps
Option 1: Pull Request Labels
One option might be to inspect the pull request for the presence of a label as outlined in this answer. As a pre-execution step, a simple script could flip the flag when the label is present:
- pwsh: |
    $urlFormat = "{0}/{1}/_apis/git/repositories/{2}/pullRequests/{3}/labels?api-version=6.0-preview.1"
    $url = $urlFormat -f `
      $env:SYSTEM_TEAMFOUNDATIONSERVERURI, `
      $env:SYSTEM_TEAMPROJECTID, `
      $env:BUILD_REPOSITORY_NAME, `
      $env:SYSTEM_PULLREQUEST_PULLREQUESTID
    $headers = @{
      Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN"
    }
    $response = Invoke-RestMethod -Uri $url -Method Get -Headers $headers
    $labels = $response.value.name
    Write-Host "##vso[task.setvariable variable=PullRequestLabels]$labels"
  displayName: 'Fetch Pull Request Labels'
  condition: and(succeeded(), eq(variables['Build.Reason'], 'PullRequest'))
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)
- pwsh: |
    if ("$(PullRequestLabels)" -like "*chromatic*") {
      Write-Host "##vso[task.setvariable variable=publishChromatic]true"
    }
  condition: and(succeeded(), eq(variables['Build.Reason'], 'PullRequest'))
  displayName: 'Check for Chromatic label'
I like this option in that it provides a bit of traceability for which pull requests were deployed. Unfortunately, there's no way to queue a build automatically when the PR labels are modified, so you'd need to have the label on the PR before triggering the pipeline.
You could also establish a different pattern such as triggering based on a convention like a value that appears in the name of the Pull Request, etc.
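Such a title convention could be sketched like this (illustrative Python; the `[chromatic]` marker and the function name are made up for this example, and the PR title would come from the Pull Requests API):

```python
def wants_chromatic(pr_title: str, marker: str = "[chromatic]") -> bool:
    """Return True when the pull request title opts into a Chromatic publish.

    `marker` is a made-up convention for this example; use whatever
    token your team agrees on.
    """
    return marker.lower() in pr_title.lower()

print(wants_chromatic("Fix header layout [chromatic]"))  # opts in
print(wants_chromatic("Fix header layout"))              # does not
```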
Option 2: Pipeline to Trigger Chromatic
If you'd rather have a Build Validation option labeled 'Deploy to Chromatic' to automate triggering your deployment to Chromatic, a simple option would be to create a pipeline that triggers your pipeline with the publishChromatic parameter.
trigger: none

steps:
- checkout: none
- pwsh: |
    $pipelineId = 1234
    $urlFormat = "{0}/{1}/_apis/pipelines/{2}/runs?api-version=6.0-preview.1"
    $url = $urlFormat -f `
      $env:SYSTEM_TEAMFOUNDATIONSERVERURI, `
      $env:SYSTEM_TEAMPROJECTID, `
      $pipelineId
    $headers = @{
      Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN"
    }
    $body = @{
      resources = @{ repositories = @{ self = @{ refName = "$(Build.SourceBranch)" } } }
      variables = @{
        originalPrId = @{
          value = "$(System.PullRequest.PullRequestId)"
        }
      }
      templateParameters = @{
        publishChromatic = $true
      }
    } | ConvertTo-Json -Depth 10
    Invoke-RestMethod -Uri $url -Method Post -Body $body -ContentType "application/json" -Headers $headers
  displayName: 'Trigger Chromatic Pipeline'
  condition: eq(variables['Build.Reason'], 'PullRequest')
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)
This script takes a fire-and-forget approach to triggering your original pipeline.
If you need to have a successful deployment to Chromatic as part of your PR, you could adjust the original pipeline to report a pull-request status.
In your original pipeline, add the following as a post-execution step:
- pwsh: |
    $urlFormat = "{0}/{1}/_apis/git/repositories/{2}/pullRequests/{3}/statuses?api-version=6.0-preview.1"
    $url = $urlFormat -f `
      $env:SYSTEM_TEAMFOUNDATIONSERVERURI, `
      $env:SYSTEM_TEAMPROJECTID, `
      $env:BUILD_REPOSITORY_NAME, `
      "$(originalPrId)"
    $headers = @{
      Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN"
    }
    $body = @{
      status = "succeeded"
      description = "completed chromatic regression"
      context = @{
        name = "qualitygate/chromatic"
      }
      targetUrl = "http://chromatic.com/your/buildid"
    } | ConvertTo-Json -Depth 10
    Invoke-RestMethod -Uri $url -Method Post -Body $body -ContentType "application/json" -Headers $headers
  displayName: Report status to PR
  condition: and(succeeded(), ne(variables['originalPrId'], ''))
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)
To require a successful chromatic quality gate, add a Status Check to your Branch Policy with the same name mentioned above qualitygate/chromatic.
Option 3: Further down the Rabbit hole
You can establish even deeper integration with Chromatic by building a custom extension that adds specialized menus to the pull request checks menu. The custom menu could include JavaScript-enabled buttons to trigger your pipeline without the helper pipeline from Option 2.
While not necessarily dependent on writing a custom extension, you could also create an Azure Function App that listens for webhooks from Chromatic and posts status updates back to your PR with a custom UI that links back to the Chromatic build. You'd simply need to query the Azure DevOps API to map the branch name in the Chromatic payload to the corresponding PR.
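A hedged sketch of that branch-to-PR mapping step (illustrative Python; the function name is made up, but `searchCriteria.sourceRefName` is part of the Pull Requests - Get Pull Requests endpoint):

```python
from urllib.parse import urlencode

def pr_lookup_url(collection_uri: str, project: str, repository_id: str,
                  branch: str, api_version: str = "6.0") -> str:
    """Build a Pull Requests - List URL filtered by source branch, so a
    Chromatic webhook payload's branch name can be mapped to its PR."""
    query = urlencode({
        "searchCriteria.sourceRefName": f"refs/heads/{branch}",
        "searchCriteria.status": "active",
        "api-version": api_version,
    })
    return (f"{collection_uri}{project}/_apis/git/repositories/"
            f"{repository_id}/pullrequests?{query}")

# Hypothetical values:
print(pr_lookup_url("https://dev.azure.com/fabrikam/", "MyProject",
                    "my-repo", "feature/header"))
```

The first entry in the response's `value` array (if any) is the active PR for that branch.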

Related

Is there a way to prevent a scheduled pipeline from executing again when the first execution hasn't ended?

I have a pipeline that executes every hour. Sometimes an execution takes more than an hour and the next execution starts anyway. Is there a way to prevent this and, for example, have the new execution queued instead?
thank you for all the help
It seems you have multiple build agents. Assuming you are using self-hosted build agents, you could specify certain demands so that only one agent can be used. In this way, if the agent is not free, the build will keep waiting. To use a particular agent, add a demand of Agent.Name equals agentname; the agent name can be found in the capabilities of the agent.
pool:
  name: MyPool
  demands:
  - myCustomCapability # check for existence of capability
  - agent.name -equals agentname # check for specific string in capability
Another way is triggering the pipeline via REST api and through the PowerShell task. You could use the REST API Builds - List to get the detailed build info and check the latest build status:
https://dev.azure.com/{organization}/{project}/_apis/build/builds?definitions={definitions}&api-version=6.0
In the YAML, we could add a powershell task to get the build status, like:
- task: PowerShell@2
  inputs:
    targetType: inline
    script: |
      $url = "https://dev.azure.com/{organization}/{project}/_apis/build/builds?definitions={definitionID}&api-version=6.0"
      $connectionToken = "Your PAT Here"
      $base64AuthInfo = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(":$($connectionToken)"))
      $buildPipeline = Invoke-RestMethod -Uri $url -Headers @{ authorization = "Basic $base64AuthInfo" } -Method Get
      $BuildStatus = $buildPipeline.value.status | Select-Object -First 1
      Write-Host "This is Build Status: $BuildStatus"
This lists the build statuses for the specified definition, newest first, so Select-Object -First 1 gets the latest build's status. If the status is completed, queue the build; if it is not completed, do not queue the build.
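The queue-or-wait decision can be sketched as a small helper (illustrative Python, assuming the JSON shape returned by the Builds - List endpoint):

```python
def should_queue(builds: list[dict]) -> bool:
    """Return True when it is safe to queue a new run.

    `builds` is the `value` array from a Builds - List response, ordered
    newest-first; each item has a `status` field such as 'completed',
    'inProgress', or 'notStarted'.
    """
    if not builds:  # no previous runs: safe to queue
        return True
    return builds[0]["status"] == "completed"

# Hypothetical payloads:
print(should_queue([{"status": "completed"}]))   # latest run finished
print(should_queue([{"status": "inProgress"}]))  # latest run still going
```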

Managing a CI/CD Pipeline from another Pipeline - Azure Devops

I have a pipeline (say A). In it, I have written a PowerShell script which helps me update a particular package in the solution. After merging the changed code with the master branch using this PowerShell script, it automatically triggers another pipeline (say B) whose trigger depends on changes in master. I have to control the triggering of pipeline B from pipeline A - e.g. get the status of the triggered pipeline B, disable the trigger of pipeline B from A, etc. Please help me with this scenario.
You can use a PowerShell task to call the Build REST API to get the status of another pipeline (i.e. Pipeline B).
First to get the latest build of Pipeline B, you can use below rest api.
GET https://dev.azure.com/{organization}/{project}/_apis/build/builds?definitions={definitions}&$top={$top}&api-version=5.1
Below is the inline script example in the powershell task to get the build status.
$uri = "$(System.TeamFoundationCollectionUri)$(System.TeamProject)/_apis/build/builds?definitions={definitionId}&`$top=1&api-version=5.1"
 
$headers = @{ Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN" }
$result = Invoke-RestMethod -Uri $uri -Method Get -ContentType "application/json" -Headers $headers
$status = $result.value[0].status
$env:SYSTEM_ACCESSTOKEN is the predefined variable with which you can refer to the access token directly in the scripts.
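Note that in a YAML pipeline the token is a secret variable and must be mapped into the step's environment explicitly; a minimal fragment (the inline script mirrors the one above):

```yaml
- task: PowerShell@2
  inputs:
    targetType: inline
    script: |
      # $env:SYSTEM_ACCESSTOKEN is populated only because of the env mapping below
      $headers = @{ Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN" }
      Write-Host "Access token mapped into the step environment."
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)
```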
To cancel pipeline B in pipeline A you can call update Build rest api. See below example. First get the build from above api, then update the status to cancelling
$build = $result.value[0]
$uriupdate = "$(System.TeamFoundationCollectionUri)$(System.TeamProject)/_apis/build/builds/$($build.id)?api-version=5.1"
$build.status = "cancelling"
$body = $build | ConvertTo-Json -Depth 10
$update = Invoke-RestMethod -Uri $uriupdate -Headers @{Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN"} -ContentType "application/json" -Method Patch -Body $body
To skip a build when pushing the changes, you can just include [skip ci] in the commit message as Shamrai mentioned.
git commit -m "message [skip ci]"
I have to control the triggering of this pipeline B from Pipeline A -
like get the status of the triggered pipeline B, disable the trigger
of pipeline B from A, etc.
You can use REST API with PowerShell to control your builds: Builds - List.
To disable the trigger, add [skip ci] to your commit message: Skipping CI for individual commits
You can use an output variable in a PowerShell task, and based on that you can control whether the next job executes. This way you don't have to use multiple build pipelines - just multiple jobs in a single pipeline.
You can refer to the Microsoft documentation here.

ADS 2019 - How to pass variables between build jobs

Using Azure DevOps Server 2019.1, I am starting to work with multi-job builds, to allow me to split up work onto multiple agents.
The flow itself works fine. I have it setup like this
Begin Job - this basically tests a few variables and Updated the buildnumber
(Depends on Begin Job) RunTest Job - A job to run "multi-configuration", which splits a comma-separated list of task categories
(Depends on RunTest Job) End Job - A trigger build task for a new build in the chain
While the jobs depend on another job, this only seems to affect when they start; they do not get access to the information produced by the job that ran before.
Basically what i need is the value of a variable that has been set (buildNumber) in the Begin Job.
I need this version number in the RunTest and End Job.
How can I get this information? I read articles saying this is not possible, but have not seen a valid workaround yet. Does anyone have a decent workaround?
Did you try a multi-job output variable?
jobs:
# Set an output variable from job A
- job: A
  pool:
    vmImage: 'vs2017-win2016'
  steps:
  - powershell: echo "##vso[task.setvariable variable=myOutputVar;isOutput=true]this is the value"
    name: setvarStep
  - script: echo $(setvarStep.myOutputVar)
    name: echovar
# Map the variable into job B
- job: B
  dependsOn: A
  pool:
    vmImage: 'ubuntu-16.04'
  variables:
    # map in the variable; remember, expressions require single quotes
    myVarFromJobA: $[ dependencies.A.outputs['setvarStep.myOutputVar'] ]
  steps:
  - script: echo $(myVarFromJobA)
    name: echovar
Update 2:
Using YAML should be the simplest solution. If you insist on the Classic build view, you could try to accomplish this by storing the values in a file (JSON, XML, YAML, what have you); a later job can read the file and either use the value directly or re-set the variable.
When you queue the next build, it will not affect the file in source control, and the default value will also not change.
Passing variables between jobs in the same stage requires working with output variables.
However, according to this, using outputs in a different job is not supported in the Classic UI format.
As a workaround in this scenario, you can share variables via pipeline variables (shared across jobs in the same pipeline).
1. Set a key variable in the pipeline variables.
2. Add an inline PowerShell task with the content below in your first job:
$url = "$($env:SYSTEM_TEAMFOUNDATIONCOLLECTIONURI)$env:SYSTEM_TEAMPROJECTID/_apis/build/definitions/$($env:SYSTEM_DEFINITIONID)?api-version=5.0"
Write-Host "URL: $url"
$pipeline = Invoke-RestMethod -Uri $url -Headers @{
    Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN"
}
Write-Host "Pipeline = $($pipeline | ConvertTo-Json -Depth 100)"
# Update an existing variable to its new value
$pipeline.variables.key.value = "value"
####****************** update the modified object **************************
$json = @($pipeline) | ConvertTo-Json -Depth 99
$updatedef = Invoke-RestMethod -Uri $url -Method Put -Body $json -ContentType "application/json" -Headers @{Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN"}
Write-Host "=========================================================="
Write-Host "The value of variable key is updated to" $updatedef.variables.key.value
Write-Host "=========================================================="
3. Run the pipeline; the value of the key variable is successfully updated.
So you can run the PowerShell script in the first job to update the value of the key variable; all subsequent jobs can then access the updated variable easily.
Note:
For the script itself, you only need to change the lines $pipeline.variables.key.value = "value" (necessary) and Write-Host "The value of variable key is updated to" $updatedef.variables.key.value (optional).
If I want to set the variable named MyTest to the value MyValue, the lines should be $pipeline.variables.MyTest.value = "MyValue" and Write-Host "The value of variable MyTest is updated to" $updatedef.variables.MyTest.value.
To make sure the PowerShell task in a job can access the OAuth token, we should Allow Scripts to Access OAuth Token: click the agent job name and check the box.
To give the pipeline permission to update a pipeline variable (edit the build pipeline), go to the pipeline's security settings and set the Edit build pipeline permission to Allow for the project's Build Service account.

Setting Git Tag from Azure Devops Build Pipeline on Complete

I'm trying to set a tag with the current version number determined by GitVersion on the GIT commit at the end of a successful build. Feels like I can't be the first one to be doing this, but I'm struggling to find something that works.
Azure DevOps Pipelines has a feature in Get Sources to "Tag sources" On Success. I've set this, using a tag format based on a variable that is set by one of the agent tasks I have (GitVersion).
I can see in the debug logs that this variable is getting set by the GitVersion component that I've added to the pipeline.
2019-12-06T20:54:20.2390794Z ##[debug]Processed: ##vso[task.setvariable variable=GitVersion.MajorMinorPatch;]2.98.0
However if I leave it just as this, I get a tag created as "v$(GitVersion.MajorMinorPatch)" which means that at the time that the tag is being created that that variable no longer exists.
The Tag Format help tooltip says
"Tag format could be a combination of user-defined or pre-defined variables that have a scope of "All". For example: '$(Build.DefinitionName)$(Build.DefinitionVersion)$(Build.BuildId)$(Build.BuildNumber)$(My.Variable)'"
So I guess the problem is that this variable created during the pipeline does not have a scope of All.
I then tried adding a pipeline variable to the pipeline of "GitVersion.MajorMinorPatch" with the hope that this was at the right scope and hoping that when the "task.setvariable" command is run, that this will set the variable value of this higher scoped variable.
However in this case I just got a tag "v" created.
So I am a bit stuck. Somehow I need to be able to dynamically create or set a variable at scope ALL with the value I want to tag here.
I'd be really grateful for any ideas on this.
If you are doing a yaml pipeline, you can add the following steps
- checkout: self
  persistCredentials: true

## Rest of pipeline ##

- script: |
    git tag $(GitVersion.NugetVersionV2)
    git push origin $(GitVersion.NugetVersionV2)
  workingDirectory: $(Build.SourcesDirectory)
The persistCredentials allows the token to be automatically passed to other git commands. Note the assignment of workingDirectory, otherwise I had an error that the location was not a git repository.
For an annotated tag rather than lightweight tag, the syntax would look like this...
- script: |
    git tag -a <tagname> -m <message>
    git push origin <tagname>
To get a user/date against it you need to set the user name/email as well e.g.
- script: |
    git config --global user.name "BuildService"
    git config --global user.email "autobuild@fabrikam.com"
    git tag -a <tagname> -m <message>
    git push origin <tagname>
For this to work, the Project Collection Build Server account (not the Project Build Service Accounts group) needs to be allocated the Contribute permission for the Repositories
Expanding on the excellent answer from Paul Hatcher, I'd like to add that for me the account was called Project Collection Build Service in Azure DevOps Server 2019. This also seems to be in line with the current Microsoft documentation.
Sorry for the answer, my reputation does not yet suffice to comment.
I can see in the debug logs that this variable is getting set by the
GitVersion component that I've added to the pipeline.
The variable GitVersion.MajorMinorPatch you saw in the log is a step-level variable, which means its life cycle begins only at the GitVersion task that sets it.
Per the definition you are referring to, its scope must be "All", meaning it must be a global variable - for example, the predefined variables the system provides by default, or the custom variables specified in the Variables tab.
Based on how the GitVersion task works, the GitVersion.MajorMinorPatch value is in fact generated and stored as the current build's build number.
So the most convenient way to tag the GitVersion.MajorMinorPatch value onto the repo is to use $(Build.BuildNumber):
v$(Build.BuildNumber)
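If the GitVersion task in your setup does not already overwrite the build number, here is a minimal fragment (assuming GitVersion.MajorMinorPatch is populated earlier in the job, as in the log above) that forces it with the build.updatebuildnumber logging command:

```yaml
# Rename the running build so v$(Build.BuildNumber) picks up the version.
- script: echo "##vso[build.updatebuildnumber]$(GitVersion.MajorMinorPatch)"
  displayName: Set build number from GitVersion
```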
Update:
To add the GitVersion.MajorMinorPatch value generated by the GitVersion task into the pipeline's Variables, apply the script below in a PowerShell task:
$connectionToken = "{PAT Token}"
$urlget = "https://dev.azure.com/{org}/{project}/_apis/build/definitions/$(System.DefinitionId)?api-version=5.1"
$base64AuthInfo = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(":$($connectionToken)"))
$getdef = Invoke-RestMethod -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)} -Method GET -ContentType application/json -Uri $urlget
Write-Host "Pipeline = $($getdef | ConvertTo-Json -Depth 100)"
$bvalue = @"
{
  "value": "$(GitVersion.MajorMinorPatch)"
}
"@
$getdef.variables | Add-Member -Name "GitVersion.MajorMinorPatch" -Value (ConvertFrom-Json $bvalue) -MemberType NoteProperty -Force -PassThru
$getdef = $getdef | ConvertTo-Json -Depth 100
$getdef | clip
$urlput = "https://dev.azure.com/{org}/{project}/_apis/build/definitions/$(System.DefinitionId)?api-version=5.1"
$putdef = Invoke-RestMethod -Uri $urlput -Method PUT -Body $getdef -ContentType "application/json" -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)}
As I mentioned previously, I still don't think it is possible to specify $(GitVersion.MajorMinorPatch) in the Tag format.
I still strongly suggest using $(Build.BuildNumber) to tag the $(GitVersion.MajorMinorPatch) value.
- pwsh: |
    # Construct PAT authentication header
    $base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f "user", $env:SYSTEM_ACCESSTOKEN)))
    $headers = @{ Authorization = ("Basic {0}" -f $base64AuthInfo) }
    $url = "$(System.CollectionUri)/$(System.TeamProject)/_apis/git/repositories/$(Build.Repository.ID)/annotatedtags?api-version=5.0-preview.1"
    $body = @{
      name = "$(GitVersion.MajorMinorPatch)"
      message = "automatically added"
      taggedObject = @{
        objectId = "$(Build.SourceVersion)"
      }
    } | ConvertTo-Json
    Invoke-RestMethod -Uri $url -Headers $headers -Method Post -ContentType "application/json" -Body $body
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)
  displayName: 'Add tag'

How to add a manual intervention step in Azure Pipelines yaml

How do you add a manual intervention step into a multi-stage Azure Devops YAML pipeline?
In jenkins you can do some thing like:
stage('approve-prod') {
    steps {
        input "Approve deployment to production?"
    }
}
I am looking for the equivalent in Azure Devops YAML.
Note: this is for the newly released multi-stage Azure Devops pipelines, not the old style release pipelines. Related announcement here https://devblogs.microsoft.com/devops/whats-new-with-azure-pipelines/
Microsoft have now made available a brand new official Manual Validation task that allows for manual intervention to be added into a YAML pipeline.
Quick example of how to use this task is as follows:
jobs:
- job: waitForValidation
  displayName: Wait for external validation
  pool: server
  timeoutInMinutes: 4320 # job times out in 3 days
  steps:
  - task: ManualValidation@0
    timeoutInMinutes: 1440 # task times out in 1 day
    inputs:
      notifyUsers: |
        test@test.com
        example@example.com
      instructions: 'Please validate the build configuration and resume'
      onTimeout: 'resume'
Some key constraints to be aware of:
This task is only supported in YAML pipelines
Can be used only in an agentless job of a YAML pipeline.
Azure DevOps/Pipelines now has a feature called Environments which supports approvals.
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops#approvals
We are using them as a workaround. Basically we have specified two environments ApprovalNotRequired and ApprovalRequired in Azure DevOps. On the latter we have specified who can approve deployments. Then in the pipeline we reference the environment like this.
- stage: 'Approval not required'
  jobs:
  - deployment: 'MyDeployment'
    displayName: MyDeployment
    environment: 'ApprovalNotRequired'
    strategy:
      runOnce:
        deploy:
          # whatever
- stage: 'Approval required'
  jobs:
  - deployment: 'MyDeployment2'
    displayName: MyDeployment2
    environment: 'ApprovalRequired'
    strategy:
      runOnce:
        deploy:
          # whatever
The first stage will run without interference and the second will pause until it's approved.
This doesn't appear to be available yet, but there is a GitHub Issue tracking this:
https://github.com/MicrosoftDocs/vsts-docs/issues/4241
From the issue:
So what I heard from the product team is that this "approval per stage" policy isn't available yet but is on their backlog.
There is also a Roadmap work item tracking it:
https://dev.azure.com/mseng/AzureDevOpsRoadmap/_workitems/edit/1510336/
Because Microsoft has been ignoring this for a long time, and because this is critical missing functionality, I will add a workaround here (for the moment it only works by ignoring the entire step for all machines in a multi-stage YAML, but I think that can be solved too; I am not looking into it for now).
Unfortunately, a check task needs to be added before each task you want to be skippable. This can also be solved by iterative insertion (https://learn.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops).
Shortly, in order to be able to ignore a specific task:
T1 checks the build run for the "IgnoreStep" tag. If found, it sets the IgnoreStep variable to true and removes the tag.
T2 runs only if IgnoreStep is false.
When something fails and I want to ignore the step, I add the "IgnoreStep" tag to the run and retry.
For adding tags, I am using the API because there is no task to do it yet. For request details, press F12 in Chrome and check what is sent to the server after you add a tag, then export the request to PowerShell.
Below you have the YAML:
trigger: none

jobs:
- deployment: Dev
  environment:
    name: Dev
    resourceType: virtualMachine
    tags: online
  strategy:
    runOnce:
      deploy:
        steps:
        - task: PowerShell@2
          displayName: CheckIfWeShouldIgnoreStep
          name: CheckIfWeShouldIgnoreStep
          inputs:
            targetType: 'inline'
            script: |
              $user = "user"
              $pass = "pass"
              $secpasswd = ConvertTo-SecureString $pass -AsPlainText -Force
              $credential = New-Object System.Management.Automation.PSCredential($user, $secpasswd)
              $response = Invoke-RestMethod -Uri "https://server/tfs/collection/projectId/_apis/build/builds/$(Build.BuildId)/tags" `
                -Method "GET" `
                -Headers @{
                  "accept" = "application/json;api-version=6.0;excludeUrls=true;enumsAsNumbers=true;msDateFormat=true;noArrayWrap=true"
                } `
                -ContentType "application/json" `
                -Credential $credential -UseBasicParsing
              Write-Host "##vso[task.setvariable variable=IgnoreStep]false"
              Write-Host "Tags: $response"
              foreach ($tag in $response)
              {
                if ($tag -eq "IgnoreStep")
                {
                  Write-Host "##vso[task.setvariable variable=IgnoreStep]true"
                  Invoke-RestMethod -Uri "https://server/tfs/collection/projectId/_apis/build/builds/$(Build.BuildId)/tags/IgnoreStep" `
                    -Method "DELETE" `
                    -Headers @{
                      "accept" = "application/json;api-version=6.0;excludeUrls=true;enumsAsNumbers=true;msDateFormat=true;noArrayWrap=true"
                    } `
                    -Credential $credential -UseBasicParsing
                }
              }
        - task: PowerShell@2
          displayName: Throw Error
          condition: eq(variables.IgnoreStep, false)
          inputs:
            targetType: 'inline'
            script: |
              throw "Error"