Azure DevOps: Reference Pipeline Settings in PowerShell

I'd like to reference the user who ran the pipeline, and to indicate in a PowerShell script whether a specific earlier task in a multi-stage pipeline ran successfully. How can I do that?
Something like:
Write-Host $env:UserThatRanPipeline $env:Task:BuildApp1:SuccessOrFail
So I'd get the output:
John Smith Success

I'd like to reference a user who ran pipeline and indicate that a
previous specific task in multi-stage pipeline ran successfully or not
in a PowerShell script.
1. As Shayki Abramczyk suggested above, you can use Build.RequestedFor to output the user who ran the pipeline. See predefined variables; you can use something like: Write-Host "$(Build.RequestedFor)"
2. To get the status of your AzureRmWebAppDeployment@4 task, there is currently no predefined variable that does that job, so you have to add some logic yourself...
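Predefined variables are also exposed to scripts as environment variables (dots replaced by underscores, upper-cased), so a sketch showing both forms inside a pipeline step might look like this (the task layout is illustrative):

```yaml
steps:
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      # Macro syntax is expanded by Azure DevOps before the script runs:
      Write-Host "Requested for: $(Build.RequestedFor)"
      # The same value is available as an environment variable inside PowerShell:
      Write-Host "Requested for: $env:BUILD_REQUESTEDFOR"
```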
As a workaround:
You can set a variable SuccessOrFail: 'Succeed' like this in YAML:
variables:
  solution: '**/*.sln'
  buildPlatform: 'Any CPU'
  buildConfiguration: 'Release'
  SuccessOrFail: 'Succeed'
And then add one PowerShell task right after your AzureRmWebAppDeployment@4 task:
- task: PowerShell@2
  condition: failed()
  inputs:
    targetType: 'inline'
    script: |
      # This script will run only when any previous task failed
      echo "##vso[task.setvariable variable=SuccessOrFail]Fail"
3. Make sure there's no custom condition set for your AzureRmWebAppDeployment@4 task. By default it is not an independent task, so it only runs when all previous tasks succeed.
To sum up:
This PowerShell script runs when the AzureRmWebAppDeployment@4 task fails or is skipped, and it resets the value of SuccessOrFail to Fail. So if AzureRmWebAppDeployment@4 succeeds, the value of $(SuccessOrFail) is Succeed; if it fails or is skipped, the value is Fail.
In other words, Succeed means the task definitely succeeded, while Fail actually means the task did not succeed (failed or skipped).
The order of your tasks should be: other tasks => AzureRmWebAppDeployment@4 => the PS task above => other tasks => your own PowerShell task.
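Putting the pieces together, a minimal sketch of the whole pattern might look like this (the deployment task's inputs are omitted, and the final task simply prints the two values the question asked for):

```yaml
variables:
  SuccessOrFail: 'Succeed'

steps:
- task: AzureRmWebAppDeployment@4
  # ... deployment inputs go here
- task: PowerShell@2
  condition: failed() # runs only when a previous task failed
  inputs:
    targetType: 'inline'
    script: |
      echo "##vso[task.setvariable variable=SuccessOrFail]Fail"
- task: PowerShell@2
  condition: always() # always report, even after a failure
  inputs:
    targetType: 'inline'
    script: |
      Write-Host "$(Build.RequestedFor) $(SuccessOrFail)"
```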

Related

Azure Devops - Run a task only if previous task has run successfully

I am currently writing a pipeline to deploy my infrastructure with Terraform.
I would like some tasks not to run unless the first task has run successfully. I have found conditions such as success() or failed().
However, upon reading closely, I realize that these conditions are based on the job status, while in my case there is just one job containing all my tasks.
Is there any way to specify such a condition? And can it refer to a specific task name? (For example, Task C can run only if Task A ran successfully.)
You could add a PowerShell task right after the first task that sets a variable (called ones in this example) using ##vso[task.setvariable variable=ones]true. Then use this variable in the condition of every task that should not run unless the first task succeeded.
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      Write-Host "##vso[task.setvariable variable=ones]true"
Then set condition: eq(variables.ones, 'true') for each task that should not run unless the first task succeeded. For example:
- task: PowerShell@2
  displayName: task4
  inputs:
    targetType: 'inline'
    script: |
      # Write your PowerShell commands here.
      Write-Host "Hello World"
  condition: eq(variables.ones, 'true')
The PowerShell task only runs if the previous tasks ran successfully; in my case there is only task1. Thus, if task1 succeeds, the PowerShell task sets the variable to true, and the following tasks run because the variable equals true. If task1 fails, neither the PowerShell task nor the following tasks will run.
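Assembled into one minimal pipeline sketch (task1 here is a placeholder for the Terraform task that may fail):

```yaml
steps:
- task: PowerShell@2
  displayName: task1 # placeholder for the task that may fail
  inputs:
    targetType: 'inline'
    script: |
      # ... the real work goes here; a non-zero exit fails the task
- task: PowerShell@2
  displayName: set flag because task1 succeeded
  inputs:
    targetType: 'inline'
    script: |
      Write-Host "##vso[task.setvariable variable=ones]true"
- task: PowerShell@2
  displayName: task4
  condition: eq(variables.ones, 'true') # skipped when task1 failed
  inputs:
    targetType: 'inline'
    script: |
      Write-Host "Hello World"
```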

How to set an azure devops pipeline result as success even if one of the job fails

I am developing an Azure CD YAML pipeline to deploy the result of a CI pipeline onto a virtual machine.
Right now, and simplifying things a little for the purpose of this post, the CD pipeline is quite simple and consists of a single stage with 3 jobs:
The first job runs scripts to stop a somewhat complex application. This can sometimes fail.
The second job only runs if the first job fails. This gives an administrator the opportunity to do a manual intervention (leveraging the built-in Manual Validation task) and fix the issue encountered in the first job. If the administrator is happy to continue running the deployment pipeline, he resumes the run.
The third job is the deployment of the new version of the application.
Here is the overall structure of the YAML pipeline:
jobs:
- deployment: StopApplication
  environment:
    name: 'EnvA' # This environment is a set of virtual machines running self-hosted Azure Agents.
    resourceType: VirtualMachine
  strategy:
    rolling:
      maxParallel: 1
      deploy:
        steps:
        - task: ...
- job: ManualIntervation
  displayName: Manual intervention to fix issue while stopping application
  pool: server
  dependsOn: StopApplication
  condition: failed() # This job will run only if job StopApplication has failed.
  timeoutInMinutes: 60
  steps:
  - task: ManualValidation@0
    timeoutInMinutes: 50
    inputs:
      notifyUsers:
        someone@somewhere.com
      instructions: 'Do something'
      onTimeout: 'reject'
- deployment: DeployApp
  dependsOn:
  - StopApplication
  - ManualIntervation
  condition: xor(succeeded('StopApplication'), succeeded('ManualIntervation'))
  workspace:
    clean: all
  environment:
    name: 'EnvA' # This environment is a set of virtual machines running self-hosted Azure Agents.
    resourceType: VirtualMachine
  strategy:
    rolling:
      maxParallel: 1
      deploy:
        steps:
        - task: ...
The problem I have is that if the first deployment job fails, but the administrator reviews the problem, fixes it, resumes the run of the pipeline, and the last deployment job succeeds, Azure DevOps still shows my pipeline as Failed (red cross in the DevOps portal), which I can understand since one of the jobs failed.
Nevertheless, functionally the deployment succeeded, so I would like to set/force the result of the pipeline run to success so that Azure DevOps displays the green check.
Does anyone know a way to achieve this?
I would assume that it is possible; otherwise I would not understand why we have the opportunity for manual interventions in a pipeline.
The build result is read-only and cannot be updated after the build completes. However, you can check out the workarounds below to get rid of the Failed sign (red cross) in the DevOps portal.
1. Use continueOnError for the task in the StopApplication job. For example:
jobs:
- deployment: StopApplication
  ...
  steps:
  - task: taskName
    ...
    continueOnError: true
When the continueOnError attribute is set to true, the pipeline's result will be SucceededWithIssues when the task fails. You will get an exclamation mark instead of a red cross.
You also need to change the condition for job ManualIntervation so that it checks whether the StopApplication job finished with SucceededWithIssues. See below:
- job: ManualIntervation
  dependsOn: StopApplication
  condition: eq(dependencies.StopApplication.result, 'SucceededWithIssues')
2. Another workaround is to separate the StopApplication job from the other jobs into a different pipeline.
You need to create two pipelines. The first pipeline only has the StopApplication job; the second pipeline contains the rest of the jobs, and is triggered from the first pipeline using the REST API.
In the first pipeline, add a PowerShell task after the task that may fail, to check the job status and trigger the second pipeline via the REST API. See the example below:
- powershell: |
    $body = @{
      templateParameters = @{
        ManualIntervation = "false"
      }
    }
    if ("$(Agent.JobStatus)" -eq "Failed") {
      $body.templateParameters.ManualIntervation = 'true'
    }
    $url = "$(System.TeamFoundationCollectionUri)$(System.TeamProject)/_apis/pipelines/{second-pipelineId}/runs?api-version=6.1-preview.1"
    $result5 = Invoke-RestMethod -Uri $url -Headers @{Authorization = "Bearer $(System.AccessToken)"} -Method Post -Body (ConvertTo-Json $body) -ContentType "application/json"
  condition: always() # always run this task
Then, in the second pipeline, define a runtime parameter ManualIntervation and set the condition for job ManualIntervation as below:
parameters:
- name: ManualIntervation
  type: string
  default: false
...
- job: ManualIntervation
  dependsOn: StopApplication
  condition: eq('${{ parameters.ManualIntervation }}', 'true')
When the first pipeline executes, the PowerShell task triggers the second pipeline with the templateParameters request body, overriding the parameter ManualIntervation in the second pipeline. If ManualIntervation is true, the ManualIntervation job is executed.
This way the second pipeline can succeed even if the first pipeline failed.

Azure YAML Get variable from a job run in a previous stage

I am creating a YAML pipeline in Azure DevOps that consists of two stages.
The first stage (Prerequisites) is responsible for reading the git commit and creating a comma-separated variable containing the list of services that were affected by the commit.
The second stage (Build) is responsible for building and unit testing the project. This stage consists of many templates, one for each service. In the template script, the job checks whether the relevant service is in the variable created in the previous stage. If the job finds the service, it continues to build and test it; otherwise it skips that job.
Run.yml:
stages:
- stage: Prerequisites
  jobs:
  - job: SetBuildQueue
    steps:
    - task: PowerShell@2
      name: SetBuildQueue
      displayName: 'Set.Build.Queue'
      inputs:
        targetType: inline
        script: |
          ## ... PowerShell script to get changes - working as expected
          Write-Host "Build Queue Auto: $global:buildQueueVariable"
          Write-Host "##vso[task.setvariable variable=buildQueue;isOutput=true]$global:buildQueueVariable"
- stage: Build
  jobs:
  - job: StageInitialization
  - template: Build.yml
    parameters:
      projectName: Service001
      projectLocation: src/Service001
  - template: Build.yml
    parameters:
      projectName: Service002
      projectLocation: src/Service002
Build.yml:
parameters:
  projectName: ''
  projectLocation: ''

jobs:
- job:
  displayName: '${{ parameters.projectName }} - Build'
  dependsOn: SetBuildQueue
  continueOnError: true
  condition: and(succeeded(), contains(dependencies.SetBuildQueue.outputs['SetBuildQueue.buildQueue'], '${{ parameters.projectName }}'))
  steps:
  - task: NuGetToolInstaller@1
    displayName: 'Install Nuget'
Issue:
When the first stages runs it will create a variable called buildQueue which is populated as seen in the console output of the PowerShell script task:
Service001 Changed
Build Queue Auto: Service001;
However, when it gets to stage two and tries to run the build template, evaluating the condition produces the following output:
Started: Today at 12:05 PM
Duration: 16m 7s
Evaluating: and(succeeded(), contains(dependencies['SetBuildQueue']['outputs']['SetBuildQueue.buildQueue'], 'STARS.API.Customer.Assessment'))
Expanded: and(True, contains(Null, 'service001'))
Result: False
So my question is how do I set the dependsOn and condition to get the information from the previous stage?
That's because you are trying to access the variable in a different stage from the one where you defined it. Currently that's not possible directly: each stage runs on a fresh agent instance.
In this blog you can find a workaround that involves writing the variable to disk and then passing it along as a file, leveraging pipeline artifacts.
To pass the variable FOO from a job to another one in a different stage:
1. Create a folder that will contain all variables you want to pass; any folder could work, but something like mkdir -p $(Pipeline.Workspace)/variables might be a good idea.
2. Write the contents of the variable to a file, for example echo "$FOO" > $(Pipeline.Workspace)/variables/FOO. Even though the name could be anything you'd like, giving the file the same name as the variable might be a good idea.
3. Publish the $(Pipeline.Workspace)/variables folder as a pipeline artifact named variables.
4. In the second stage, download the variables pipeline artifact.
5. Read each file into a variable, for example FOO=$(cat $(Pipeline.Workspace)/variables/FOO).
6. Expose the variable in the current job, just like we did in the first example: echo "##vso[task.setvariable variable=FOO]$FOO".
You can then access the variable by expanding it within Azure Pipelines ($(FOO)) or use it as an environment variable inside a bash script ($FOO).
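The steps above can be sketched as a two-stage pipeline (stage and job names here are illustrative):

```yaml
stages:
- stage: One
  jobs:
  - job: A
    steps:
    - bash: |
        FOO="some value"
        mkdir -p $(Pipeline.Workspace)/variables
        echo "$FOO" > $(Pipeline.Workspace)/variables/FOO
    # Publish the folder as a pipeline artifact named "variables"
    - publish: $(Pipeline.Workspace)/variables
      artifact: variables
- stage: Two
  jobs:
  - job: B
    steps:
    # Download the artifact published by the previous stage
    - download: current
      artifact: variables
    - bash: |
        FOO=$(cat $(Pipeline.Workspace)/variables/FOO)
        # Re-expose the value as a pipeline variable in this job
        echo "##vso[task.setvariable variable=FOO]$FOO"
    - bash: |
        echo "FOO is $(FOO)"
```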

Lock resources between multiple builds that run parallel in Azure DevOps

Suppose I have a build:
Job1:
Task1: Build
Job2:
dependsOn: Job1
Task2: Test
And the test task uses some kind of database, or another unique resource.
I would like to know if it is possible, when multiple builds are running in parallel, to lock Job2 so that it runs exclusively, without other builds trying to access the same resource.
I am using cmake and ctest, so I know I can do something similar between separate unit tests with RESOURCE_LOCK, but I am certain that I will not be able to lock that resource between multiple ctest processes.
Agree with @MarTin's workaround: set a variable with a PowerShell task in Job1, then get this variable and use it in the job condition for Job2.
You don't need to use the API to add a global variable. There is another, easier way you can try: output variables. This feature lets you configure an output variable and access it in the next job that depends on the first job.
The sample of setting an output variable in Job1:
##vso[task.setvariable variable=firstVar;isOutput=true]Need skip Job2
Then get the output variable from Job1 in Job2:
Job2Var: $[ dependencies.Job1.outputs['outputVars.firstVar'] ]
Then, in Job2, you can use it in the job condition (so that Job2 is skipped when Job1 set the skip value):
condition: ne(dependencies.Job1.outputs['outputVars.firstVar'], 'Need skip Job2')
The completed simple sample looks like this:
jobs:
- job: Job1
  steps:
  - task: PowerShell@2
    inputs:
      targetType: 'inline'
      script: 'echo "##vso[task.setvariable variable=firstVar;isOutput=true]Need skip Job2"'
    name: outputVars
- job: Job2
  dependsOn: Job1
  variables:
    Job2Var: $[ dependencies.Job1.outputs['outputVars.firstVar'] ]
  steps:
  ...
The logic I want to express is: dynamically assign a value to the output variable based on Job1 and the current pipeline execution state. One specific value represents that Job2 should be locked, which means its execution is skipped. In Job2's condition expression, when the value obtained from dependencies.Job1.outputs['outputVars.firstVar'] matches the predefined expected value, the current Job2 is skipped.

Is it possible to conditionally set the artifact name in my Azure DevOps build pipeline "publish artifact" task?

I was wondering if it is possible to conditionally set the name of my build artifact in my Azure DevOps build pipeline "publish artifact" task? I want to produce different artifacts based on the input to my build pipeline. Say, based on the input pipeline variables, I want to produce one of three artifacts ("red", "blue", "green"). Is it possible to specify the artifact being produced in my "publish artifact" task based on the input variable, or is it easier/better just to produce three build pipelines?
Is it possible to conditionally set the artifact name in my Azure DevOps build pipeline “publish artifact” task?
I am afraid there is no out-of-the-box way to do that. If you want to conditionally set the artifact name, we have to use nested variables in the pipeline.
However, at this moment nested variables (like $(CustomArtifactName_$(Build.SourceBranchName))) are not yet supported in build pipelines.
As a workaround, you could add a Run Inline Powershell task to set the variable based on the input pipeline variables.
On my side, I use Build_SourceBranchName as the input pipeline variable. Then I add the following script in the Inline Powershell task:
- task: InlinePowershell@1
  displayName: 'Inline Powershell'
  inputs:
    Script: |
      $branch = $Env:Build_SourceBranchName
      if ($branch -eq "TestA5")
      {
          Write-Host "##vso[task.setvariable variable=CustomArtifactName]Red"
      }
      else
      {
          Write-Host "##vso[task.setvariable variable=CustomArtifactName]Blue"
      }
Then, in the Publish Build Artifacts task, I set the ArtifactName to drop-$(CustomArtifactName):
- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact: drop'
  inputs:
    ArtifactName: 'drop-$(CustomArtifactName)'
Hope this helps.
This is the bash version, in case you run the task on Linux. NAME is a custom variable; its upper-cased value is used to compose the name of a predefined release variable, which is then read via indirect expansion.
trigger_model_version=RELEASE_ARTIFACTS_${NAME^^}_BUILDNUMBER
export version=${!trigger_model_version}
echo Deploying ${NAME}:${version}
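The two bash features doing the work here are ${NAME^^} (upper-cases the value) and ${!ref} (indirect expansion: reads the variable whose name is stored in ref). A self-contained sketch with made-up values (the release variable and build number below are illustrative; in a real release, Azure DevOps predefines RELEASE_ARTIFACTS_<ALIAS>_BUILDNUMBER for each linked artifact):

```shell
# Illustrative stand-ins for what Azure DevOps would predefine:
NAME="webapp"
RELEASE_ARTIFACTS_WEBAPP_BUILDNUMBER="20240101.1"

# Compose the variable name, upper-casing NAME with ${NAME^^} ...
trigger_model_version=RELEASE_ARTIFACTS_${NAME^^}_BUILDNUMBER
# ... then dereference it with indirect expansion ${!var}:
version=${!trigger_model_version}
echo "Deploying ${NAME}:${version}"
```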