How to set Azure DevOps yaml variables conditionally based on parameter value - azure-devops

I am trying to set variables based on a parameter value in a YAML pipeline. I've read many other posts showing examples like the one below that the authors say worked, but I cannot get past errors when I try to do something similar.
I've tried many variations on this example as well, too many to list here. Sometimes it reports 'values' as a duplicate key. In other cases I've been able to start a run and get the prompt with the environment selection, but then opening the stage dialog throws a parse error.
Is there some sort of difference between variable declaration at the top of the file vs in a stage or job? That seems to be the difference that I notice when reading through other examples.
Ultimately what I'm trying to do is set the ServiceConnection variable value based on the value of the environment parameter.
parameters:
- name: environment
  displayName: Environment
  type: string
  values:
  - DEV
  - TEST

pr: none
trigger: none

pool: PrivateAgentPool

variables:
- name: 'isMain'
  value: $[eq(variables['Build.SourceBranch'], 'refs/heads/main')]
- name: 'buildConfiguration'
  value: 'Release'
- name: 'environment'
  value: ${{ parameters.environment }}
- name: 'ServiceConnection'
  ${{ if eq(variables['environment'], 'DEV') }}:
    value: 'svcConnectionDev'
  ${{ if eq(variables['environment'], 'TEST') }}:
    value: 'svcConnectionTest'

Looks like your solution is almost correct. Consider the below example.
parameters:
- name: region
  type: string
  default: westeurope
  values:
  - westeurope
  - northeurope

variables:
  ${{ if eq(parameters['region'], 'westeurope') }}:
    ServiceConnection: "svcConnectionDev"
  ${{ else }}:
    ServiceConnection: "svcConnectionTest"
If you want to use this ServiceConnection variable elsewhere in the pipeline, you can reference it with the macro syntax $(ServiceConnection).
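Applied to the original DEV/TEST question, a minimal sketch of the same idea might look like the following (the connection names and the DEV default are assumptions based on the question, not a verified pipeline):
parameters:
- name: environment
  displayName: Environment
  type: string
  default: DEV
  values:
  - DEV
  - TEST

variables:
  # Template expressions are evaluated at compile time, so test the
  # parameter directly instead of the not-yet-expanded variable.
  ${{ if eq(parameters.environment, 'DEV') }}:
    ServiceConnection: 'svcConnectionDev'
  ${{ else }}:
    ServiceConnection: 'svcConnectionTest'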

you could use bash with conditions:
steps:
- bash: |
    echo "##vso[task.setvariable variable=ServiceConnection]svcConnectionDev"
  condition: eq('${{ parameters.environment }}', 'DEV')
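Extended to cover both environments, a rough sketch could be (connection names are assumed from the question):
steps:
- bash: echo "##vso[task.setvariable variable=ServiceConnection]svcConnectionDev"
  displayName: "Set ServiceConnection for DEV"
  condition: eq('${{ parameters.environment }}', 'DEV')
- bash: echo "##vso[task.setvariable variable=ServiceConnection]svcConnectionTest"
  displayName: "Set ServiceConnection for TEST"
  condition: eq('${{ parameters.environment }}', 'TEST')
# Later steps can then read the value with the usual macro syntax.
- bash: echo "Using service connection: $(ServiceConnection)"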

Related

Azure Devops: Overwrite Environment Variable at runtime

I am trying to set up an environment variable, coming from the Azure DevOps Library, that is set to an empty value by default. This part is working, but I want the development teams to be able to overwrite that value ad hoc. I don't want to add this manually to all environment variables, and I would like to build a condition in my playbooks that says when: var != "" rather than when: var != "$(rollback)".
Here is my config:
name: $(Date:yyyyMMdd)$(Rev:.r)

trigger: none
pr: none

resources:
  pipelines:
  - pipeline: my-ui
    source: my-ui-ci-dev
    trigger: true

variables:
  - group: Dev

jobs:
  - job: my_cd
    pool:
      vmImage: "ubuntu-20.04"
    container:
      image: "myacr.azurecr.io/devops/services-build:$(services_build_tag_version)"
      endpoint: "Docker Registry"
    steps:
      - task: Bash@3
        displayName: "My Playbook"
        env:
          git_username: "$(git_username)"
          git_password: "$(git_password)"
          config_repo: "$(config_repo)"
          service_principal_id: "$(service_principal_id)"
          service_principal_secret: "$(service_principal_secret)"
          subscription_id: "$(subscription_id)"
          tenant_id: "$(tenant_id)"
          rollback: "$(rollback)"
          source_dir: "$(Build.SourcesDirectory)"
          env_dir: "$(Agent.BuildDirectory)/env"
          HELM_EXPERIMENTAL_OCI: 1
        inputs:
          targetType: "inline"
          script: |
            ansible-playbook cicd/ansible/cd.yaml -i "localhost, " -v
When choosing to run the pipeline, I would like the developers to just go to Run > Add Variable > Manually add the variable and value > run pipeline
And then in the playbook the value is "" if not defined, or the value they typed if it is. Any suggestions on how I can do this with AZDO?
You can do this more easily with a run-time parameter
parameters:
- name: rollback
  type: string
  default: ' '

variables:
  - group: dev
  - ${{ if ne(parameters.rollback, ' ') }}:
    - name: rollback
      value: ${{ parameters.rollback }}
The way this works in practice is:
- the pipeline queue dialog automatically includes a 'rollback' text field
- if the developer types a value into the rollback parameter field, that value is used to override the rollback variable
- otherwise, the value from the variable group is used.
Note that you need to give the parameter a default value of a single space; otherwise the pipeline won't let you leave it empty.
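For illustration, a later step could branch on the effective value like this (the echo-only step is just a sketch, not part of the original playbook):
steps:
  - bash: |
      # Resolves to the variable-group value unless the developer typed one at queue time.
      if [ -z "${ROLLBACK// /}" ]; then
        echo "No rollback value supplied; running a normal deployment"
      else
        echo "Rolling back to: $ROLLBACK"
      fi
    displayName: "Show effective rollback value"
    env:
      ROLLBACK: $(rollback)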

"Configuring the trigger failed, edit and save the pipeline again" with no noticeable error and no further details

I have run in to an odd problem after converting a bunch of my YAML pipelines to use templates for holding job logic as well as for defining my pipeline variables. The pipelines run perfectly fine, however I get a "Some recent issues detected related to pipeline trigger." warning at the top of the pipeline summary page and viewing details only states: "Configuring the trigger failed, edit and save the pipeline again."
The odd part here is that the pipeline works completely fine, including triggers. Nothing is broken and no further details are given about the supposed issue. I currently have YAML triggers overridden for the pipeline, but I did also define the same trigger in the YAML to see if that would help (it did not).
I'm looking for any ideas on what might be causing this or how I might be able to further troubleshoot it given the complete lack of detail that the error/warning provides. It's causing a lot of confusion among developers who think there might be a problem with their builds as a result of the warning.
Here is the main pipeline. The build repository is a shared repository holding code that is used across multiple repos in the build system. dev.yaml contains dev-environment-specific variable values. shared.yaml holds conditionally set variables based on the branch the pipeline is running on.
name: ProductName_$(BranchNameLower)_dev_$(MajorVersion)_$(MinorVersion)_$(BuildVersion)_$(Build.BuildId)

resources:
  repositories:
    - repository: self
    - repository: build
      type: git
      name: Build
      ref: master

# This trigger isn't used yet, but we want it defined for later.
trigger:
  batch: true
  branches:
    include:
      - 'dev'

variables:
  - template: YAML/variables/shared.yaml@build
  - template: YAML/variables/dev.yaml@build

jobs:
  - template: ProductNameDevJob.yaml
    parameters:
      pipelinePool: ${{ variables.PipelinePool }}
      validRef: ${{ variables.ValidRef }}
Then this is the start of the actual job yaml. It provides a reusable definition of the job that can be used in more than one over-arching pipeline:
parameters:
- name: dependsOn
  type: object
  default: {}
- name: pipelinePool
  default: ''
- name: validRef
  default: ''
- name: noCI
  type: boolean
  default: false
- name: updateBeforeRun
  type: boolean
  default: false

jobs:
  - job: Build_ProductName
    displayName: 'Build ProductName'
    pool:
      name: ${{ parameters.pipelinePool }}
      demands:
        - msbuild
        - visualstudio
    dependsOn:
      - ${{ each dependsOnThis in parameters.dependsOn }}:
        - ${{ dependsOnThis }}
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], variables['ValidRef']))
    steps:
      # step logic here
Finally, we have the variable YAML which conditionally sets pipeline variables based on what we are building:
variables:
  - ${{ if or(eq(variables['Build.SourceBranch'], 'refs/heads/dev'), eq(variables['Build.SourceBranch'], 'refs/heads/users/ahenderson/azure_devops_build')) }}:
    - name: BranchName
      value: Dev
  # Continue with the rest of the pipeline variables, setting each value for each different context.
You can check my post here : Azure DevOps pipeline trigger issue message not going away
As I can see in your YAML file, you are using this branch: 'refs/heads/users/ahenderson/azure_devops_build'.
I think some of the YAML files you are referring to are missing from the branch defined as the default in your build settings.
Switch the default branch to your branch.
I think I may have figured out the problem. It appears that this is related to the use of conditionals in the variable setup. While the variables will be set in any valid trigger configuration, it appears that the proper values are not used during validation and that may have been causing the problem. Switching my conditional variables to first set a default value and then replace the value conditionally seems to have fixed the problem.
It would be nice if Microsoft would give a more useful error message here, something to the extent of the values not being found for a given variable, but adding defaults does seem to have fixed the problem.
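A minimal sketch of the default-plus-conditional-override pattern (the variable name and values are illustrative, not the exact ones from my pipeline):
variables:
  # Unconditional default so validation always finds a value...
  - name: BranchName
    value: Unknown
  # ...then override it for the branches we care about.
  - ${{ if eq(variables['Build.SourceBranch'], 'refs/heads/dev') }}:
    - name: BranchName
      value: Dev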
In our case it was because the path to the YAML file started with a slash: /builds/build.yaml
Removing the slash fixed the error: builds/build.yaml
In my case I had replaced the trigger in the YAML file. The pipeline then does not know where to start.
# ASP.NET
# Build and test ASP.NET projects.
# Add steps that publish symbols, save build artifacts, deploy, and more:
# https://learn.microsoft.com/azure/devops/pipelines/apps/aspnet/build-aspnet-4
trigger:
- "main" # <-- That was name wrong
For me it was this causing the problem...
The following pipeline.yaml causes the error "Configuring the trigger failed, edit and save the pipeline again". It has to do with the environment name name: FeatureVMs.${{ variables.resourceName }}. If I replace ${{ variables.resourceName }} with something else, e.g. FeatureVMs.develop, the error does not occur. The strange thing is: if I save the pipeline once with all the triggers I want and a valid environment such as FeatureVMs.develop, the triggers are saved; if I then change it to what I actually want, a dynamic environment resource selection FeatureVMs.${{ variables.resourceName }}, the error appears, but the pipeline still works as I expect. So the workaround is to save the pipeline once without the variable but with the triggers you want, then switch to the variable and live with the error at the top of the pipeline.
This causes the error
trigger: none

variables:
  - name: resourceName
    value: $(Build.SourceBranchName)
  - name: sourcePipeline
    value: vetsxl-ci

resources:
  pipelines:
    - pipeline: vetsxl-ci
      source: vetsxl-ci
      trigger:
        branches:
          include:
            - develop
            - feature/F*
            - release/*
            - review/*
            - demo/*
            - hotfix/H*
            - tests/*
            - test/*

stages:
- stage: Deploy
  displayName: 'Deploy'
  condition: and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))
  jobs:
  - deployment: DeployVM
    displayName: 'Deploy to develop VM'
    environment:
      name: FeatureVMs.${{ variables.resourceName }}
    strategy:
      rolling:
        deploy:
          steps:
            - template: deploy.yml
              parameters:
                sourcePipeline: ${{ variables.sourcePipeline }}
This works without any errors.
trigger: none

variables:
  - name: resourceName
    value: $(Build.SourceBranchName)
  - name: sourcePipeline
    value: vetsxl-ci

resources:
  pipelines:
    - pipeline: vetsxl-ci
      source: vetsxl-ci
      trigger:
        branches:
          include:
            - develop
            - feature/F*
            - release/*
            - review/*
            - demo/*
            - hotfix/H*
            - tests/*
            - test/*

stages:
- stage: Deploy
  displayName: 'Deploy'
  condition: and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))
  jobs:
  - deployment: DeployVM
    displayName: 'Deploy to develop VM'
    environment:
      name: FeatureVMs.develop
    strategy:
      rolling:
        deploy:
          steps:
            - template: deploy.yml
              parameters:
                sourcePipeline: ${{ variables.sourcePipeline }}

Azure DevOps conditional initialization of variables is not working

I am quite new to YAML and Azure DevOps and I am trying to initialize a variable based on a condition:
variables:
- name: DisplayName
  ${{ if eq('$(env)', 'wh') }}:
    value: 'DisplayNAME-WH'
  ${{ if eq('$(env)', 'lw') }}:
    value: 'DisplayNAME-LW'
Where my 'env' is variable passed via UI in Azure DevOps
The issue I am getting is that "DisplayName" stays empty (it is not initialized).
Can you help me out?
Thanks.
Azure DevOps conditional initialization of variables is not working
According to the Understand variable syntax:
In a pipeline, template expression variables (${{ variables.var }})
get processed at compile time, before runtime starts. Macro syntax
variables ($(var)) get processed during runtime before a task runs.
So, we cannot use $(env) inside ${{ if eq('$(env)', 'wh') }}. That is because ${{ if ... }} is parsed before $(env) is expanded.
To resolve this issue, we can define the variable as a runtime parameter instead:
parameters:
- name: 'env'
  default: 'wh'
  type: string
  values:
  - wh
  - lw

variables:
  ${{ if eq(parameters.env, 'wh') }}:
    DisplayName: 'DisplayNAME-WH'
  ${{ if eq(parameters.env, 'lw') }}:
    DisplayName: 'DisplayNAME-LW'

steps:
- script: echo $(DisplayName)

How can I invoke a YAML pipeline that has both variables and runtime parameters?

I have a scenario where I need to have both:
- runtime parameters, so that the pipeline can be triggered manually from the UI, where users triggering it can choose from a predefined set of options (defined in YAML)
- variables, so that the pipeline can be invoked via REST APIs
Regarding runtime parameters, I was able to create the following sample pipeline:
parameters:
- name: image
  displayName: Pool Image
  type: string
  default: ubuntu-latest
  values:
  - windows-latest
  - ubuntu-latest

trigger: none

stages:
- stage: A
  jobs:
  - job: A
    steps:
    - pwsh: |
        echo "This should be triggering against image: $env:MY_IMAGE_NAME"
      env:
        MY_IMAGE_NAME: ${{ parameters.image }}
When I run it, I can see the dropdown list where I can choose the image name and it is reflected in the output message of the PowerShell script.
Regarding variables, I have defined one called "image" here (notice the value is empty):
The idea now is to invoke the pipeline from REST APIs and have the image name replaced by the value coming from the variable:
{
  "definition": {
    "id": 1
  },
  "sourceBranch": "master",
  "parameters": "{\"image\": \"windows-latest\" }"
}
In order to make the step print the value I'm passing here, I need to correct the environment variable in some way. I thought it would be sufficient to write something like:
env:
  MY_IMAGE_NAME: ${{ coalesce(variables.image, parameters.image) }}
That's because I want to give the priority to the variables, then to parameters, so that in case none is specified, I always have a default value the pipeline can use.
However, this approach doesn't work, probably because we're dealing with different expansion times for variables, but I don't really know what I should be writing instead (if there is a viable option, of course).
What I also tried is:
env:
  MY_IMAGE_NAME: ${{ coalesce($(image), parameters.image) }}
  MY_IMAGE_NAME: ${{ coalesce('$(image)', parameters.image) }}
  MY_IMAGE_NAME: $[ coalesce(variables.image, parameters.image) ]
  MY_IMAGE_NAME: $[ coalesce($(image), parameters.image) ]
None of those are working, so I suspect this may not be feasible at all.
There is a workaround that I'm currently thinking of, which is to create two different pipelines so that those can be invoked independently, but while this is quite easy for me to accomplish, given I'm using a lot of templates, I don't find it the right way to proceed, so I'm open to any suggestion.
I tested this and found you might need to define a variable and assign the parameter's value to it (e.g. Mimage: ${{ parameters.image }}), then define another variable (e.g. Vimage) and assign $[coalesce(variables.image, variables.Mimage)] to it. Then refer to $(Vimage) in the env field of the PowerShell task. Please check out the YAML below.
parameters:
- name: image
  displayName: Pool Image
  type: string
  default: ubuntu-latest
  values:
  - windows-latest
  - ubuntu-latest

trigger: none

stages:
- stage: A
  jobs:
  - job: A
    variables:
      Mimage: ${{ parameters.image }}
      Vimage: $[coalesce(variables.image, variables.Mimage)]
    steps:
    - pwsh: |
        echo "This should be triggering against image: $env:MY_IMAGE_NAME"
      env:
        MY_IMAGE_NAME: $(Vimage)
The env field of the PowerShell task is usually for mapping secret variables. You can refer to $(Vimage) directly in the PowerShell script: echo "This should be triggering against image: $(Vimage)".
Note: To queue a build via REST API with provided parameters, you need to check "Let users override this value when running this pipeline" to make the variable settable at queue time.
Update:
You can try passing the variables to the parameters of the template to make the template parameters dynamic. Please check the simple YAML below.
jobs:
- template: template.yaml
  parameters:
    MTimage: ${{ parameters.image }}
    VTimage: $(Vimage)
template.yaml:
parameters:
  MTimage:
  VTimage:

jobs:
- job: buildjob
  steps:
  - powershell: |
      echo "${{ parameters.VTimage }}"
      echo "${{ parameters.MTimage }}"

how can I use IF ELSE in variables of azure DevOps yaml pipeline with variable group?

I'm trying to assign one of two values to a variable, alongside a variable group, and can't find a reference for how to use IF ELSE.
Basically I need to convert this Jenkins logic to Azure DevOps.
Jenkins
if (branch == 'master') {
    env = 'a'
} else if (branch == 'dev') {
    env = 'b'
}
I found one reference below, but it only seems to work if the variables section doesn't have variable groups.
https://stackoverflow.com/a/57532526/5862540
But in my pipeline I already have a variable group for secrets, so I have to use the name/value convention, and the example fails with errors like "expected a mapping", "A mapping was not expected", or "Unexpected value 'env'".
variables:
  - group: my-global
  - name: env
    value:
      ${{ if eq(variables['Build.SourceBranchName'], 'master') }}:
        env: a
      ${{ if eq(variables['Build.SourceBranchName'], 'dev') }}:
        env: b
or
variables:
  - group: my-global
  - name: env
    value:
      ${{ if eq(variables['Build.SourceBranchName'], 'master') }}: a
      ${{ if eq(variables['Build.SourceBranchName'], 'dev') }}: b
This code works.
I'm doing something similar with parameters.
variables:
- name: var1
  ${{ if eq(parameters.var1, 'custom') }}:
    value: $(var1.manual.custom)
  ${{ if ne(parameters.var1, 'custom') }}:
    value: ${{ parameters.var1 }}
Update 09/09/2021
We now have a native if/else expression and can write it like this:
variables:
  - group: PROD
  - name: env
    ${{ if eq(variables['Build.SourceBranchName'], 'master') }}:
      value: a
    ${{ else }}:
      value: b

steps:
  - script: |
      echo '$(name)'
      echo '$(env)'
Original reply
Syntax with template expressions ${{ if ... }} is not limited to the job/stage level. Both pipelines below do the same thing and produce the same output:
stages:
- stage: One
  displayName: Build and restore
  variables:
    - group: PROD
    - name: env
      ${{ if eq(variables['Build.SourceBranchName'], 'master') }}:
        value: a
      ${{ if eq(variables['Build.SourceBranchName'], 'dev') }}:
        value: b
  jobs:
  - job: A
    steps:
    - script: |
        echo '$(name)'
        echo '$(env)'

variables:
  - group: PROD
  - name: env
    ${{ if eq(variables['Build.SourceBranchName'], 'master') }}:
      value: a
    ${{ if eq(variables['Build.SourceBranchName'], 'dev') }}:
      value: b

steps:
  - script: |
      echo '$(name)'
      echo '$(env)'
Microsoft a few weeks ago released a new feature for YAML pipelines that lets you do just that: IF ELSE notation.
https://learn.microsoft.com/en-us/azure/devops/release-notes/2021/sprint-192-update#new-yaml-conditional-expressions
Writing conditional expressions in YAML files just got easier with the use of ${{ else }} and ${{ elseif }} expressions. Below are examples of how to use these expressions in YAML pipelines files.
steps:
- script: tool
  env:
    ${{ if parameters.debug }}:
      TOOL_DEBUG: true
      TOOL_DEBUG_DIR: _dbg
    ${{ else }}:
      TOOL_DEBUG: false
      TOOL_DEBUG_DIR: _dbg

variables:
  ${{ if eq(parameters.os, 'win') }}:
    testsFolder: windows
  ${{ elseif eq(parameters.os, 'linux') }}:
    testsFolder: linux
  ${{ else }}:
    testsFolder: mac
I wanted to have runtime condition evaluation, something similar to compile time:
variables:
  VERBOSE_FLAG:
    ${{ if variables['System.Debug'] }}:
      value: '--verbose'
    ${{ else }}:
      value: ''
but unfortunately Azure DevOps does not support a function like if(condition, then-case, else-case). So I played around and found out that it's possible to do a double string replacement using the replace function. It does look a bit hacky, of course.
For example, one may want to tweak task inputs depending on whether system debugging is enabled or not. This cannot be done using "standard conditional insertion" (${{ if … }}:), because System.Debug isn't in scope in template expressions. So, runtime expressions to the rescue:
- job:
  variables:
    VERBOSE_FLAG: $[
        replace(
          replace(
            eq(lower(variables['System.Debug']), 'true'),
            True,
            '--verbose'
          ),
          False,
          ''
        )
      ]
  steps:
    - task: cURLUploader@2
      inputs:
        # …
        options: --fail --more-curl-flags $(VERBOSE_FLAG)
Note that using eq to check the value of System.Debug before calling replace is not redundant: Since eq always returns either True or False, we can then safely use replace to map those values to '--verbose' and '', respectively.
In general, I highly recommend sticking to boolean expressions (for example the application of a boolean-valued function like eq, gt or in) as the first argument of the inner replace application. Had we not done so and instead just written for example
replace(
  replace(
    lower(variables['System.Debug']),
    'true',
    '--verbose'
  ),
  'false',
  ''
)
then, if System.Debug were set to e.g. footruebar, the value of VERBOSE_FLAG would have become foo--verbosebar.
I think for now you're going to need to use a task to set the variable when you combine name/value syntax with conditional variable values. It looks like the object structure of the name/value syntax breaks the parsing of expressions, as you have pointed out.
For me, the following is a reasonably clean implementation, and if you want to abstract it away from the pipeline, it seems that a simple template for your many pipelines to use should satisfy the desire for a central "global" location.
variables:
  - group: FakeVarGroup
  - name: env
    value: dev

steps:
  - powershell: |
      if ($env:Build_SourceBranchName -eq 'master') {
        Write-Host "##vso[task.setvariable variable=env;isOutput=true]a"
        return
      } else {
        Write-Host "##vso[task.setvariable variable=env;isOutput=true]b"
      }
    displayName: "Set Env Value"
As far as I know, the best way to have conditional branch builds is to use "trigger" in your YAML, instead of implementing complex "if-else" logic. It is also much safer, and you have more explicit control over the branch triggers instead of relying on CI variables.
Example:
# specific branch build
jobs:
- job: buildmaster
  pool:
    vmImage: 'vs2017-win2016'
  trigger:
  - master
  steps:
  - script: |
      echo "trigger for master branch"
- job: buildfeature
  pool:
    vmImage: 'vs2017-win2016'
  trigger:
  - feature
  steps:
  - script: |
      echo "trigger for feature branch"
To have triggers with branch inclusions and exclusions, you can use the more complex trigger syntax with branches include and exclude.
Example:
# specific branch build
trigger:
  branches:
    include:
    - master
    - releases/*
    exclude:
    - releases/1.*
The official documentation of Azure DevOps Pipelines trigger in YAML is:
Azure Pipelines YAML trigger documentation
UPDATE 1:
I repost my comment here with additional notes:
I was thinking of separate pipelines because the complexity of juggling CI variables is not more maintainable than having multiple jobs with triggers in one YAML. Having multiple jobs with triggers also forces us to have a clear distinction and provision for branch management. My team has used triggers and conditional branch inclusions for a year because of these maintainability advantages.
Feel free to disagree, but to me, embedding logic in a step to check which branch is currently building and then act on it is more of an ad hoc solution, and it has given my team and me maintenance problems before.
Especially if the embedded logic grows by checking more branches, the complexity becomes worse than having a clear separation between branches. Also, if the YAML file is going to be maintained for a long time, it should have clear provisions and roadmaps across the different branches. Redundancy is unavoidable, but the intention to separate branch-specific logic pays off in the long run for maintainability.
This is why I also emphasize branch inclusions and exclusions in my answer :)
Azure YAML if-else solution (when you have a group defined, which requires the name/value notation thereafter):
variables:
  - group: my-global
  - name: env
    value: a # set by default
  - name: env
    ${{ if eq(variables['Build.SourceBranchName'], 'master') }}:
      value: b # will override default
Or if you don't have a group defined:
variables:
  env: a # set by default
  ${{ if eq(variables['Build.SourceBranchName'], 'master') }}:
    env: b # will override default