Restricting Azure Pipeline decorators - azure-devops

I want to restrict an Azure Pipelines decorator from injecting into all pipelines. As we know, once a decorator is installed it is applied to every project in the organization. How can I restrict it to certain projects?
I used the Projects API to get my project ID, and then added the project ID to my JSON template as targettask and targetid:
"properties": {
"template": "custom-postjob-decorator.yml",
"targetid": "9146bcde-ef7b-492d-8289-b3dce5f750b0"
}
It didn't work. Do we need to provide a condition in the decorator YAML to restrict it to certain projects in the organization?
I have a few more questions.
Now I don't want to run my decorator on ProjectB, and I define it like below:
steps:
- ${{ if in(variables['System.TeamProject'], 'ProjectB') }}:
  - task: ADOSecurityScanner@1   # injected decorator
I ran this and it is still injecting the decorator into ProjectB.
As you said, I have created two scenarios.
To skip if the task already exists: it works like a charm, but it did not show any messages about the skipping part. How can we tell the decorator is skipped in the pipeline?
steps:
- ${{ if eq(variables['Build.task'], 'ADOSecurityScanner@1') }}:
  - task: CmdLine@2
    displayName: 'Skipping the injector'
    inputs:
      script: |
        echo "##vso[task.logissue type=error]This Step '$(Build.task)' is not injected. You don't need this in the pipeline"
        exit 1
I have used the condition from the Microsoft documentation, similar to the one you mentioned above.
It did not skip the injector with just the conditional expression; when I added the task (PowerShell) to both the decorator and the pipeline YAML, it skipped the decorator by matching the tasks in both files.
Does it show or log any info if the decorator is skipped in the pipeline?
I observed it does display differently when the decorator is skipped, showing me the false statement.
Do we need to define anything in the pipeline YAML file as well?
I ran with the conditions you provided above and somehow the decorator is still getting injected into ProjectB. Can you let me know where I am going wrong?
Here is my basic azure-pipelines.yaml file, pasted below.
trigger:
- main

pool:
  vmImage: ubuntu-latest

steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script'
- script: |
    echo Add other tasks to build, test, and deploy your project.
    echo See https://aka.ms/yaml
  displayName: 'Run a multi-line script'

Be sure to read these docs:
- Expressions
- Specify Conditions
Simplest solution is to add a condition to the pipeline decorator:
steps:
- ${{ if not(eq(variables['azure-pipelines-tfvc-fixparallel-decorator.skip'], 'true')) }}:
  - task: tf-vc-fixparallel-log@2
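A pipeline that wants to opt out of this decorator would then set that variable; a minimal sketch (the variable name is taken from the decorator condition above):

variables:
  azure-pipelines-tfvc-fixparallel-decorator.skip: 'true'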
The decorator can use the pre-defined variables, including System.TeamProject or System.TeamProjectId:
- System.TeamProject: the name of the project that contains this build.
- System.TeamProjectId: the ID of the project that this build belongs to.
For example:
steps:
- ${{ if in(variables['System.TeamProject'], 'ProjectA', 'ProjectB', 'ProjectC') }}:
  - task: tf-vc-fixparallel-log@2
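Since you already retrieved the project GUID from the Projects API, a variant keyed on System.TeamProjectId could look like this (a sketch, reusing the GUID from your question):

steps:
- ${{ if eq(variables['System.TeamProjectId'], '9146bcde-ef7b-492d-8289-b3dce5f750b0') }}:
  - task: tf-vc-fixparallel-log@2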
You will need to include the steps: key if you aren't already. For some reason returning an empty steps[] is fine, but returning nothing will result in a failing decorator.
When you want to exclude a list of projects, simply invert the condition. So to exclude 'ProjectB':
steps:
- ${{ if not(in(variables['System.TeamProject'], 'ProjectB')) }}:
  - task: ADOSecurityScanner@1   # injected decorator
Your statement above explicitly inserts into ProjectB, nowhere else.
With regards to skipping the injector: the above ${{ if ... }}: completely removes the steps indented below it when they don't need to run. They will not appear at all.
You can also put a condition on the task itself; in that case it will show up in the timeline, but as skipped:
steps:
- task: ADOSecurityScanner@1   # injected decorator
  condition: not(in(variables['System.TeamProject'], 'ProjectB'))
You can combine the condition with a variable by adding it to the expression:
steps:
- ${{ if not(or(in(variables['System.TeamProject'], 'ProjectB'),
                eq(variables['skipInjector'], 'true'))) }}:
  - task: ADOSecurityScanner@1   # injected decorator
As you can see you can and(..., ...) and or(..., ...) these things together to form expressions as complex as you need.
You can see the evaluation of your decorator in the top-level log panel of your pipeline job: expand the Job preparation parameters section, and make sure you queue the pipeline with diagnostics turned on.
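One way to queue with diagnostics (an assumption on my part, not spelled out above) is to tick "Enable system diagnostics" in the Run pipeline dialog, or equivalently to set the system.debug variable:

variables:
- name: system.debug
  value: 'true'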
When you use a condition on the task within the decorator, expand the log for the task to see the condition evaluation.
Looking at the screenshot you posted, two versions of the decorator are included: one with conditions and one without. Make sure you uninstall or disable the other version.

Related

Triggering jobs with specific branch names (pattern) in Azure DevOps

I have a set of jobs that need to be triggered only when branches are named by a certain pattern (e.g. ctl-sw-v01.01.01-vbeta, ctl-sw-v01.11.01-vbeta, ctl-sw-v11.01.01-vbeta, ctl-sw-v01.01.21-vbeta). To make this generic, I have a pattern developed using the regex '~ /^ctl-sw-v\d+\.\d+.\d+(-vbeta)?$/'. But I am finding it confusing how to specify this in the variables section of the YAML file.
I was using it as below:
trigger:
  tags:
    include:
    - '*'
  branches:
    include:
    - ctl-sw-v01.11.01-vbeta

pool:
  vmImage: ubuntu-latest

variables:
  isBranch: $[startsWith(variables['Build.SourceBranch'], 'refs/head/~ /^ctl-sw-v\d+\.\d+.\d+(-vbeta)?$/')]

jobs:
- job: A
  condition: and(succeeded(), eq(variables.isBranch, 'true'))
  steps:
  - script: |
      echo "hello"
- job: B
  steps:
  - script: |
      echo "howdy"
Job A is being skipped continuously. I checked my regex and it produces a match.
What am I doing wrong here?
I don't think you can use regular expressions in the startsWith expression.
A solution would be to create a separate step (in a job before job A) where you evaluate the isBranch variable with a script; a sketch follows below. This creates more code, but the benefit is that you can independently test the regex in the future, which might save maintenance cost in the long run.
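A minimal sketch of that approach (job and step names are hypothetical; the regex is the question's pattern rewritten with POSIX character classes for Bash):

jobs:
- job: CheckBranch
  steps:
  - bash: |
      # Match the branch name against the pattern and publish the result
      # as an output variable for downstream jobs.
      if [[ "$BUILD_SOURCEBRANCHNAME" =~ ^ctl-sw-v[0-9]+\.[0-9]+\.[0-9]+(-vbeta)?$ ]]; then
        echo "##vso[task.setvariable variable=isBranch;isOutput=true]true"
      else
        echo "##vso[task.setvariable variable=isBranch;isOutput=true]false"
      fi
    name: branchCheck
- job: A
  dependsOn: CheckBranch
  condition: eq(dependencies.CheckBranch.outputs['branchCheck.isBranch'], 'true')
  steps:
  - script: |
      echo "hello"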
Moreover, in condition: and(succeeded(), eq(variables.isBranch, 'true')), booleans are written simply as True, not 'true'.
As Bast said, it is not supported to use regular expressions as part of condition expressions in Azure DevOps.
In addition,
Branch refs should begin with refs/heads/, but you are using refs/head/.
To avoid mistakes like this, I suggest you use Build.SourceBranchName instead of Build.SourceBranch. Build.SourceBranchName ignores the folder structure of the ref and just gives the branch name.
Here is the example:
startsWith(variables['Build.SourceBranch'], 'refs/heads/main')
startsWith(variables['Build.SourceBranchName'], 'main')
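Putting both corrections together, a non-regex prefix check (a sketch; ctl-sw-v is the common prefix from the question) could replace the variable definition:

variables:
  isBranch: $[startsWith(variables['Build.SourceBranchName'], 'ctl-sw-v')]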

AzureDevops stage dependsOn stageDependencies

How do you create a multi-stage pipeline where the stage/job name is derived from a parameter, the stages first run in parallel, and eventually one stage waits for all previous stages?
Here's what I've tried so far:
A multi-stage pipeline runs several stages in parallel depending on a Tools parameter, with dependsOn passed as a parameter. Running it in parallel for each tool, waiting on the previous stage for that tool, works smoothly.
Main template: all-wait-for-all
- ${{ each tool in parameters.Tools }}:
  - template: ../stages/all-wait-for-all.yml
    parameters:
      Tool: ${{ tool }}
stages/all-wait-for-all.yml
parameters:
- name: Tool
  type: string

stages:
- stage: ALL_WAIT_${{ parameters.Tool }}
  dependsOn:
  - PREPARE_STAGE
  - OTHER_TEMPLATE_EXECUTED_FOR_ALL_TOOLS_${{ parameters.Tool }}
Now there should be one stage that runs only once, not per tool, but only after the individual tool stages are done. It can't be hardcoded, as there are various tools. So I hoped defining the individual wait stages in a prepare job would work out:
Main template: prepare-stage
- script: |
    toolJson=$(echo '${{ convertToJson(parameters.Tools) }}')
    tools=$(echo "$toolJson" | jq '.[]' | xargs)
    stage="ALL_WAIT"
    for tool in $tools; do
      stageName="${stage}_${tool}"
      stageWaitArray+=("$stageName")
    done
    # [*] expands the whole array, not just the first element
    echo "##vso[task.setvariable variable=WAIT_ON_STAGES]${stageWaitArray[*]}"
    echo "##vso[task.setvariable variable=WAIT_ON_STAGES;isOutput=true]${stageWaitArray[*]}"
  displayName: "Define wait stages"
  name: WaitStage
stages/one-waits-for-all.yml
stages:
- stage: ONE_WAITS
  dependsOn:
  - $[ stageDependencies.PREPARE_STAGE.PREPARE_JOB.outputs['WaitStage.WAIT_ON_STAGES'] ]
whereupon the error below is shown:
Stage ONE_WAITS depends on unknown stage $[ stageDependencies.PREPARE_STAGE.PREPARE_JOB.outputs['WaitStage.WAIT_ON_STAGES'] ].
As I understand it, dependsOn cannot take dynamic $[ ] or macro $( ) expressions, which are evaluated at runtime. You can use template expressions ${{ }}, which are evaluated at queue time.
Guess I was overthinking the solution, as eventually it was pretty obvious.
The first template can be called in a loop from the main template, executed once per tool. The second template is called once, waiting on the previous stages for all tools; the stage prefix is known and only the tool-name suffix varies, so just add them in a loop directly in dependsOn. Because ${{ each }} is expanded at template compile time, dependsOn ends up with a literal list of stage names.
Here you go:
stages:
- stage: ONE_WAITS
  dependsOn:
  - PREPARE_STAGE
  - ${{ each tool in parameters.Tools }}:
    - OTHER_TEMPLATE_EXECUTED_FOR_ALL_TOOLS_${{ tool }}
    - ALL_WAIT_${{ tool }}

Conditionally copying folders in Azure Pipeline

When we perform uploads using our production pipeline, I'd like to be able to conditionally upload the assets folder of our application. It takes significantly more time to upload the contents of that folder, and it rarely has any changes anyway.
I've written the CopyFilesOverSSH task as follows...
# Node.js with Angular
trigger:
- production

pool:
  name: default

steps:
- task: CopyFilesOverSSH@0
  inputs:
    sshEndpoint: 'offigo-production-server'
    sourceFolder: '$(Agent.BuildDirectory)/s/dist/offigo-v2/'
    ${{ if eq('$(UploadAssets)', 0) }}:
      contents: |
        !browser/assets/**
    ${{ if eq('$(UploadAssets)', 1) }}:
      contents: |
        !browser/assets/static-pages/**
        !browser/assets/page-metadata/**
    targetFolder: '/var/www/docker/DocumentRoot/offigo/frontend'
    readyTimeout: '20000'
  continueOnError: true
However, when the pipeline runs it completely ignores the rules either way and just uploads all contents of the assets folder. I can't work out why it is not working correctly; some help would be greatly appreciated...
If UploadAssets is a variable name, you should use this syntax:
${{ if eq(variables['UploadAssets'], 0) }}:
Going out on a limb and assuming your future deployment steps (if using YAML pipelines) will also have this condition on what and how to deploy: I would recommend creating two templates. Each template would have all the steps outlined, and if steps are being reused, templating them means they are only defined once; a sketch follows below.
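A minimal sketch of such a steps template (the file name and parameter are hypothetical; the endpoint and folders are copied from the question):

# templates/copy-over-ssh.yml (hypothetical file name)
parameters:
- name: contents
  type: string

steps:
- task: CopyFilesOverSSH@0
  inputs:
    sshEndpoint: 'offigo-production-server'
    sourceFolder: '$(Agent.BuildDirectory)/s/dist/offigo-v2/'
    # the caller decides which include/exclude patterns to pass
    contents: ${{ parameters.contents }}
    targetFolder: '/var/www/docker/DocumentRoot/offigo/frontend'
    readyTimeout: '20000'
  continueOnError: true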
The issue here is how the variable is being declared: $() is macro syntax. To confirm, download the logs by clicking the ellipses on a job that has completed. I'd guess you'd rather need it at runtime, so $[variables.UploadAssets].
Alternatively, when editing in the browser you can click "Download full YAML" to see the expanded file as well, though this won't have any variables passed in at runtime.
Feel free to ask any more questions in the comments, as this stuff is never straightforward.
Thanks Krzysztof Madej, I tried using
${{ if eq(variables['UploadAssets'], 0) }}:
But it still didn't resolve the issue; the variable just seemed to be undefined. Using the link DreadedFrost provided about templates, I ended up adding a parameter to the pipeline like so...
# Node.js with Angular
trigger:
- production

pool:
  name: default

parameters:
- name: uploadAssets  # Upload assets folder?; required
  type: boolean       # data type of the parameter; required
  default: true

steps:
- task: CopyFilesOverSSH@0
  inputs:
    sshEndpoint: 'offigo-production-server'
    sourceFolder: '$(Agent.BuildDirectory)/s/dist/offigo-v2/'
    ${{ if eq(parameters.uploadAssets, false) }}:
      contents: |
        !browser/assets/**
    ${{ if eq(parameters.uploadAssets, true) }}:
      contents: |
        !browser/assets/static-pages/**
        !browser/assets/page-metadata/**
    targetFolder: '/var/www/docker/DocumentRoot/offigo/frontend'
    readyTimeout: '20000'
  continueOnError: true
This then adds a checkbox when I run the pipeline, so I can set whether or not to upload the folder.
I doubt this is the best way of doing it, but it resolves the issue for now; I'll have a proper look at implementing templates in the future.

Azure build pipelines - using the 'DownloadSecureFile' task within a template

I have an Azure DevOps project with a couple of YAML build pipelines that share a common template for some of their tasks.
I now want to add a DownloadSecureFile task to that template, but I can't find a way to get it to work.
The following snippet results in an error when added to the template, but works fine in the parent pipeline definition (assuming I also replace the ${{ xx }} syntax for the variable names with the $(xx) version):
- task: DownloadSecureFile@1
  name: securezip
  displayName: Download latest files
  inputs:
    secureFile: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    retryCount: 5

- task: ExtractFiles@1
  displayName: Extract files
  inputs:
    archiveFilePatterns: ${{ variables.securezip.secureFilePath }}
    destinationFolder: '${{ parameters.sourcesDir }}\secure\'
    cleanDestinationFolder: true
The error occurs on the 'Extract files' step and is Input required: archiveFilePatterns, so it looks like it's just not finding the variable.
As a workaround, I could move the download task to the parent pipeline scripts and pass the file path as a parameter. However, that means duplicating the task, which seems like a bit of a hack.
Variables in ${{ }} (template expressions) are resolved at template expansion time; they are not the output of tasks.
Output variables from tasks are referenced with $( ) (macro syntax), and they don't need to start with the word variables.
So I believe the line you're looking for is this, and it isn't affected by the template mechanism:
archiveFilePatterns: $(securezip.secureFilePath)
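Put together with the question's snippet, the template would look roughly like this (a sketch; the secure-file name stays elided, as in the question):

- task: DownloadSecureFile@1
  name: securezip
  displayName: Download latest files
  inputs:
    secureFile: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    retryCount: 5

- task: ExtractFiles@1
  displayName: Extract files
  inputs:
    # secureFilePath is an output variable of DownloadSecureFile,
    # available at runtime through the task's name
    archiveFilePatterns: $(securezip.secureFilePath)
    destinationFolder: '${{ parameters.sourcesDir }}\secure\'
    cleanDestinationFolder: true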

Azure YAML Get variable from a job run in a previous stage

I am creating a YAML pipeline in Azure DevOps that consists of two stages.
The first stage (Prerequisites) is responsible for reading the git commit and creating a comma-separated variable containing the list of services affected by the commit.
The second stage (Build) is responsible for building and unit testing the project. This stage consists of many templates, one for each service. In the template script, the job checks whether the relevant service is in the variable created in the previous stage. If the job finds the service, it continues to build and test it; otherwise it skips that job.
Run.yml:
stages:
- stage: Prerequisites
  jobs:
  - job: SetBuildQueue
    steps:
    - task: PowerShell@2
      name: SetBuildQueue
      displayName: 'Set.Build.Queue'
      inputs:
        targetType: inline
        script: |
          ## ... PowerShell script to get changes - working as expected
          Write-Host "Build Queue Auto: $global:buildQueueVariable"
          Write-Host "##vso[task.setvariable variable=buildQueue;isOutput=true]$global:buildQueueVariable"

- stage: Build
  jobs:
  - job: StageInitialization
  - template: Build.yml
    parameters:
      projectName: Service001
      projectLocation: src/Service001
  - template: Build.yml
    parameters:
      projectName: Service002
      projectLocation: src/Service002
Build.yml:
parameters:
  projectName: ''
  projectLocation: ''

jobs:
- job:
  displayName: '${{ parameters.projectName }} - Build'
  dependsOn: SetBuildQueue
  continueOnError: true
  condition: and(succeeded(), contains(dependencies.SetBuildQueue.outputs['SetBuildQueue.buildQueue'], '${{ parameters.projectName }}'))
  steps:
  - task: NuGetToolInstaller@1
    displayName: 'Install Nuget'
Issue:
When the first stage runs, it creates a variable called buildQueue, which is populated as seen in the console output of the PowerShell script task:
Service001 Changed
Build Queue Auto: Service001;
However, when it gets to stage two and tries to run the build template, the condition check returns the following output:
Started: Today at 12:05 PM
Duration: 16m 7s
Evaluating: and(succeeded(), contains(dependencies['SetBuildQueue']['outputs']['SetBuildQueue.buildQueue'], 'STARS.API.Customer.Assessment'))
Expanded: and(True, contains(Null, 'service001'))
Result: False
So my question is how do I set the dependsOn and condition to get the information from the previous stage?
That's because you want to access the variable in a different stage from the one where you defined it. Currently that's not possible; each stage runs as a new instance on a fresh agent.
In this blog you can find a workaround that involves writing the variable to disk and then passing it as a file, leveraging pipeline artifacts.
To pass the variable FOO from a job to another one in a different stage (a YAML sketch follows after the steps):
Create a folder that will contain all variables you want to pass; any folder could work, but something like mkdir -p $(Pipeline.Workspace)/variables might be a good idea.
Write the contents of the variable to a file, for example echo "$FOO" > $(Pipeline.Workspace)/variables/FOO. Even though the name could be anything you’d like, giving the file the same name as the variable might be a good idea.
Publish the $(Pipeline.Workspace)/variables folder as a pipeline artifact named variables
In the second stage, download the variables pipeline artifact
Read each file into a variable, for example FOO=$(cat $(Pipeline.Workspace)/variables/FOO)
Expose the variable in the current job: echo "##vso[task.setvariable variable=FOO]$FOO"
You can then access the variable by expanding it within Azure Pipelines ($(FOO)) or use it as an environmental variable inside a bash script ($FOO).
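A sketch of the whole flow following those steps (stage, job, and value names are hypothetical; publish/download are the pipeline-artifact shortcut steps):

stages:
- stage: One
  jobs:
  - job: SetVariable
    steps:
    - bash: |
        # Write the variable to a file inside the artifact folder
        mkdir -p $(Pipeline.Workspace)/variables
        echo "some value" > $(Pipeline.Workspace)/variables/FOO
    - publish: $(Pipeline.Workspace)/variables
      artifact: variables

- stage: Two
  jobs:
  - job: UseVariable
    steps:
    - download: current
      artifact: variables
    - bash: |
        # Read the file back and re-expose FOO in this job
        FOO=$(cat $(Pipeline.Workspace)/variables/FOO)
        echo "##vso[task.setvariable variable=FOO]$FOO"
    - bash: echo "FOO is $(FOO)"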