Passing parameters through nested templates (or declaring IF conditions on variables) - azure-devops

I would like to be able to pass a pipeline parameter all the way through my YAML pipeline without having to define a parameter in each and every YAML file.
I have a main YAML file which calls a stage YAML, which has multiple nested jobs YAMLs, which in turn call nested steps YAMLs; essentially building up my pipeline with templates, as recommended: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops
Here's a tree listing of a sample folder:
E:.
├───01_stage (many files per folder)
├───02_jobs (many files per folder)
├───03_steps (many files per folder)
└───...main pipeline files
Ideally I want to run an IF condition on checking out a repository, depending upon the pipeline being PROD or NON-PROD. I am fine with defining this as a parameter, but I am also open to it being defined as a variable. As far as I'm aware, you can't use an IF condition on variables.
This is fine
- ${{ if eq(parameters.pester, true) }}: # or even as parameters['pester']
  - name: pester
    value: yes
This is not fine
- ${{ if eq(variables.pester, true) }}: # or even as variables['pester']
  - name: pester
    value: yes
The condition I want this to run on is nested far below many templates, and it would be absolutely painful to have to re-code everything to conform to the parameter's value being declared and passed down in each file.
This is where I want it:
steps:
- ${{ if eq(parameters['masterTagged'], 'true') }}: # here
  - checkout: masterTagged
    displayName: Repo Tagged
- ${{ if ne(parameters['masterTagged'], 'true') }}: # here
  - checkout: self
    displayName: Repo Self
- template: /.pipelines/03_steps/ssh_install.yml
- template: /.pipelines/03_steps/tf_install.yml
  parameters:
    terraformVersion: ${{ parameters['terraformVersion'] }}
# ...many more templates
Here is my main YAML pipeline file:
parameters:
- name: artifactory_base
  type: boolean
  default: true
# ...many more params
- name: pester
  type: boolean
  default: true
- name: planDeploy
  type: boolean
  default: true
- name: useBackupAgent
  type: boolean
  default: false
- name: masterTagged # key param
  type: boolean
  default: true

name: Team2
pr: none

resources:
  repositories:
  - repository: masterTagged
    endpoint: nationwide-ccoe
    name: my-github-org/my-github-repo
    type: github
    ref: refs/tags/v2.0.3

trigger: none

variables:
- template: /.pipelines/config/sub-asdfasdf.config.yml
- template: /.pipelines/config/namingstd.config.yml
- ${{ if eq(parameters.artifactory_base, true) }}:
  - name: artifactory_base
    value: yes
# ...many more conditions
- ${{ if eq(parameters.pester, true) }}:
  - name: pester
    value: yes
- ${{ if eq(parameters.planDeploy, true) }}:
  - name: planDeploy
    value: yes

stages:
- template: /.pipelines/01_stage/lz_deploy.yml
  parameters:
    ${{ if eq(parameters.useBackupAgent, false) }}:
      pool:
        vmImage: Ubuntu 18.04
    ${{ if eq(parameters.useBackupAgent, true) }}:
      pool:
        name: backupAgents
    terraformVersion: $(TERRAFORM_VERSION)
Is it possible to set this masterTagged parameter and for it to filter all the way down without having to declare it each time?
Also, is it even possible to use variables instead of parameters in this manner (I understand that parameters expand before variables):
- ${{ if eq(variables.pester, true) }}: # or even as variables['pester']
  - name: pester
    value: yes
...if it is, have I been doing it wrong all this time?
Note:
I do understand that you can use a standard task condition on the checkout task (shown below); however, having a 'switch' on two tasks ruins the folder path of the checked-out repository. Even though we're only checking out one repository, it adds another folder level to $SYSTEM_DEFAULTWORKINGDIRECTORY. Doing it this way would require more re-coding of the current structure of my YAML pipelines.
- checkout: masterTagged
  condition: eq(variables['masterTagged'], 'true')
  displayName: Repo Tagged
- checkout: self
  condition: ne(variables['masterTagged'], 'true')
  displayName: Repo Self
If I could, but I know it's not possible (as seen in other people's requests), I would enable a parameter or variable on the repository reference:
resources:
  repositories:
  - repository: masterTagged
    endpoint: nationwide-ccoe
    name: my-github-org/my-github-repo
    type: github
    ref: ${{ parameters.repoRef }} # here

Is it possible to set this masterTagged parameter and for it to filter all the way down without having to declare it each time?
No, because parameters are “scoped” to the file they are defined in. This is because they are expanded when the pipeline is first compiled (see Pipeline run sequence in the docs).
You can use IF conditions on variables; however, you can't use template expressions (wrapped with ${{ }}) on variables inside templates, as the variables do not exist/have not been populated at the point of template expansion.
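To make that distinction concrete, here is a minimal sketch (the variable name just mirrors the examples above): a template expression can evaluate a variable declared in the same top-level file, because both are processed in the same compile pass; move the same IF into a nested template and variables.pester is empty at expansion time, so the branch never matches.
# azure-pipelines.yml -- works: 'pester' is declared in this file
variables:
  pester: 'true'
steps:
- ${{ if eq(variables.pester, 'true') }}:
  - script: echo pester is enabled
# inside a nested template, ${{ variables.pester }} would expand to an
# empty string, so the same IF would never be true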
One option is just using the conditions on the checkout tasks as you suggested, and dealing with the extra folder level to the default working directory. I had to do something similar a while back, our solution was to copy the contents of the repo folder up a level into the default working directory.
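For reference, a rough sketch of that copy-up workaround (the paths and the Bash step are illustrative assumptions, not the original solution; with multi-checkout, the sub-folder name matches the repository alias):
steps:
- checkout: masterTagged
  condition: eq(variables['masterTagged'], 'true')
- checkout: self
  condition: ne(variables['masterTagged'], 'true')
# with multiple checkout steps in a job, each repo is checked out into its
# own sub-folder of $(System.DefaultWorkingDirectory); copy the contents up
# one level so later steps see the paths they expect
- script: cp -R "$(System.DefaultWorkingDirectory)/masterTagged/." "$(System.DefaultWorkingDirectory)/"
  displayName: Flatten checkout folder
  condition: eq(variables['masterTagged'], 'true')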
Your other option is to do the checkout in the top level pipeline file. This will allow you to template the checkout step/s using the parameter without having to pass it all the way through the files. This is the option I would suggest as you do not have to deal with the folder structure issues of the first option.
This would look something like this:
parameters:
- name: masterTagged
  default: true
  type: boolean

resources:
  repositories:
  - repository: masterTagged
    endpoint: nationwide-ccoe
    name: my-github-org/my-github-repo
    type: github
    ref: refs/tags/v2.0.3

steps:
- ${{ if eq(parameters.masterTagged, true) }}:
  - checkout: masterTagged
- ${{ if eq(parameters.masterTagged, false) }}:
  - checkout: self
- template: ./path/to/template.yml
I hope this answers your question.

Related

Is it possible to change the deployment environment based on an output variable computed in the previous stage?

Here is a code snippet:
stages:
- stage: Apply
  dependsOn: Plan
  variables:
    OVERRIDE_ADO_ENVIRONMENT: $[ dependencies.Plan.outputs['Plan.IsTerraformPlanEmpty.OVERRIDE_ADO_ENVIRONMENT'] ]
  condition: and(succeeded(), ${{ parameters.terraform_apply }})
  jobs:
  - deployment: Apply
    environment: ${{ coalesce(variables.OVERRIDE_ADO_ENVIRONMENT, parameters.ado_environment) }}
    strategy:
      runOnce:
        deploy:
          steps:
          - template: start.yaml
          - template: terraform_init.yaml
            parameters:
I know the build variable OVERRIDE_ADO_ENVIRONMENT is declared correctly, because I can use it in the condition to skip the Apply stage completely.
However, skipping is incorrect: even if the plan is empty, there could be a change in the Terraform output variables, so the Apply logic must always run; it just doesn't need approvals in that case.
Therefore I would like to switch the environment to the one in the OVERRIDE_ADO_ENVIRONMENT build variable, which points to a special environment with no approvals.
However, trying to run this pipeline produces the following error message:
Job Apply: Environment $[ dependencies could not be found. The environment does not exist or has not been authorized for use.
From this I conclude that we cannot use a build variable here, even one computed in a previous stage.
The question is - what is the least painful way to implement this logic? If at all possible.
Edit 1
I tried an approach where I create two stages, each with a condition driven by the output variable from the previous stage. However, I found out that:
- The condition must be on the stage, not the deployment job; otherwise the environment is applied even if the deployment job's condition disables it.
- The condition on the stage does not see the build variables declared at the same level, so the condition is always false.
Here is my attempt to use this approach
parameters:
- name: terraform_apply
  type: boolean
- name: ado_environment
- name: working_directory
- name: application
  default: terraform
- name: apply_stages
  type: object
  default:
  - name: ApplyNonEmptyPlan
    displayName: Apply Non Empty Plan
    tf_plan_tag: TF_NON_EMPTY_PLAN
  - name: ApplyEmptyPlan
    displayName: Apply Empty Plan
    tf_plan_tag: TF_EMPTY_PLAN
    ado_environment: Empty TF Plan

stages:
- ${{ each apply_stage in parameters.apply_stages }}:
  - stage: ${{ apply_stage.name }}
    displayName: ${{ apply_stage.displayName }}
    dependsOn: Plan
    variables:
      TF_PLAN_TAG: $[ stageDependencies.Plan.Plan.outputs['IS_TERRAFORM_PLAN_EMPTY.TF_PLAN_TAG'] ]
    condition: and(succeeded(), ${{ parameters.terraform_apply }}, eq(variables['TF_PLAN_TAG'], '${{ apply_stage.tf_plan_tag }}'))
    jobs:
    - deployment: ${{ apply_stage.name }}
      environment: ${{ coalesce(apply_stage.ado_environment, parameters.ado_environment) }}
      strategy:
        runOnce:
          deploy:
            steps:
            - template: start.yaml
Environment creation happens at compile time, before run time, and dynamic environment names are not supported. Hence, in the snippets below, each element in coalesce must be known (or hardcoded) before you run the pipeline; it cannot depend on an output calculated in a previous stage.
environment: ${{ coalesce(variables.OVERRIDE_ADO_ENVIRONMENT, parameters.ado_environment) }}
and
environment: ${{ coalesce(apply_stage.ado_environment, parameters.ado_environment) }}
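As a possible way to rescue the two-stage approach from Edit 1 (a sketch, not verified against the original pipeline): stage-level conditions can read a previous stage's outputs directly through dependencies.<stage>.outputs, which avoids the stage-level variable that the condition could not see.
# assumes the Plan stage has a job named Plan whose step IS_TERRAFORM_PLAN_EMPTY
# sets TF_PLAN_TAG as an output variable (names taken from the question)
- stage: ApplyEmptyPlan
  dependsOn: Plan
  condition: and(succeeded(), eq(dependencies.Plan.outputs['Plan.IS_TERRAFORM_PLAN_EMPTY.TF_PLAN_TAG'], 'TF_EMPTY_PLAN'))
  jobs:
  - deployment: ApplyEmptyPlan
    environment: Empty TF Plan # hardcoded, so it exists at compile time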

For-Each an Object in Azure Devops pipeline?

I'm starting to write an application as microservices and want to have a build step that pushes the images from my pipeline. At the moment I have 3 services to push:
- stage: build_and_publish_containers
  displayName: 'Docker (:Dev Push)'
  jobs:
  - template: "docker/publish.yaml"
    parameters:
      appName: Authorization_Service
      projectPath: "Services/AuthorizationService"
      imageName: authorization-service
      imageTag: ${{ variables.imageTag }}
  - template: "docker/publish.yaml"
    parameters:
      appName: Registration_Service
      projectPath: "Services/RegistrationService"
      imageName: registration-service
      imageTag: ${{ variables.imageTag }}
  - template: "docker/publish.yaml"
    parameters:
      appName: Tennant_Service
      projectPath: "Services/TennantService"
      imageName: tennant-service
      imageTag: ${{ variables.imageTag }}
Even with only these 3 services (and I want to have many more) there is a lot of duplicated code here that I want to reduce.
I tried it with an array and an each function, but I have several pieces of information here (name / path / image name) and that could grow.
Is there a better way?
If this were a programming language I would have an array of a data model; is something like that possible in Azure DevOps?
Or maybe the information for each service could be saved in a JSON file (so 3 files at the moment, and growing) and Azure could read all the files and pull the information out of them?
You could check the example below for how to define a complex object and nested loops in Azure Pipelines. You could also look into the GitHub docs for more reference.
parameters:
- name: environmentObjects
  type: object
  default:
  - environmentName: 'dev'
    result: ['123']
  - environmentName: 'uat'
    result: ['223', '323']

pool:
  vmImage: ubuntu-latest

steps:
- ${{ each environmentObject in parameters.environmentObjects }}:
  - ${{ each result in environmentObject.result }}:
    - script: echo ${{ result }}
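Applied to the scenario in the question, a sketch along these lines should remove the duplication (the property names mirror the question's template parameters, and docker/publish.yaml is the question's own template):
parameters:
- name: services
  type: object
  default:
  - appName: Authorization_Service
    projectPath: "Services/AuthorizationService"
    imageName: authorization-service
  - appName: Registration_Service
    projectPath: "Services/RegistrationService"
    imageName: registration-service
  - appName: Tennant_Service
    projectPath: "Services/TennantService"
    imageName: tennant-service

stages:
- stage: build_and_publish_containers
  displayName: 'Docker (:Dev Push)'
  jobs:
  # one templated job per entry in the services object
  - ${{ each service in parameters.services }}:
    - template: "docker/publish.yaml"
      parameters:
        appName: ${{ service.appName }}
        projectPath: ${{ service.projectPath }}
        imageName: ${{ service.imageName }}
        imageTag: ${{ variables.imageTag }}
Adding a service then becomes a matter of adding one entry to the object, instead of copying a whole template call.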

doing a task after a looping YAML template-ized azure devOps pipeline

I have a YAML Azure DevOps pipeline that loops through a series of configurations, copying artifacts to various places. What I want to do, after the looping is done, is something else (I'd like to send an email, but the question is more general than that).
But I can't insert anything after the looping part of the YAML, at least not with any of the experiments I've tried. Here's the YAML that calls the YAML template, with a comment for where I'd like to add another step. How might I do this?
parameters:
- name: configuration
  type: object
  default:
  - Texas
  - Japan
  - Russia
  - Spaghetti
  - Philosophy

trigger:
- dev
- master

resources:
  repositories:
  - repository: templates
    name: BuildTemplates
    type: git

stages:
- ${{ each configuration in parameters.configuration }}:
  - template: build.yml#templates
    parameters:
      configuration: ${{ configuration }}
      appName: all
# Where I'd like to have another task or job or step or stage that can send an email or perhaps other things
Just define a new stage:
stages:
- ${{ each configuration in parameters.configuration }}:
  - template: build.yml#templates
    parameters:
      configuration: ${{ configuration }}
      appName: all
- stage: secondStage
  jobs:
  - job: jobOne
    steps:
    - task: PowerShell@2
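The snippet above stops at the task declaration; a minimal completion might look like this (the inline script is a placeholder, not part of the original answer):
- stage: secondStage
  jobs:
  - job: jobOne
    steps:
    - task: PowerShell@2
      inputs:
        targetType: inline
        # placeholder for the email (or other post-processing) logic
        script: Write-Host 'all configuration builds finished'
Because stages run sequentially in declaration order unless dependsOn says otherwise, secondStage starts only after every templated stage has finished.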

How can I pass one of group variables value as a template parameter?

I've got a YAML pipeline that references some templates.
I have a variable group linked to this main YAML file and I want to pass one of its variables to a template.
It's simple when I want to "just use it", as below:
example:
stages:
- stage: Deployment
  variables:
  - group: My_group_variables
  jobs:
  - template: /templates/jobs/myJobTemplate.yml
    parameters:
      someParameter: $(variable_from_my_variable_group)
myJobTemplate.yml:
parameters:
- name: someParameter
  default: ''
jobs:
- job: Myjob
  steps:
  - task: Bash@3
    inputs:
      targetType: 'inline'
      script: cat ${{ parameters.someParameter }}
It does not cooperate when I want to have parameter validation, like:
parameters:
- name: environmentName
  type: string
  values:
  - Development
  - Test
  - UAT
  - Production
Or when I want to use a "service connection" name as a variable:
...
- task: KubernetesManifest@0
  displayName: Deploy to Kubernetes cluster
  inputs:
    action: 'deploy'
    kubernetesServiceConnection: ${{ parameters.kubernetesServiceConnection }}
    namespace: ${{ parameters.kubernetesNamespace }}
    manifests: ${{ variables.manifestFile }}
...
Does anyone know how I should use those variables with pre-validated parameters or service connections?
It's most probably an issue with when values are resolved: pre-defined parameter values and service connection names are checked at compile time, while values from $() are resolved at runtime.
I cannot use extends and variables in this template.
Maybe someone has a pattern for these kinds of usage?
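One pattern that fits the compile-time constraint (a sketch under the assumption that the values can live in YAML rather than in the variable group): variables defined in a variables template file are expanded at compile time, so they can feed validated parameters and service-connection inputs.
# vars/test.yml (hypothetical variables template replacing the variable group)
variables:
  environmentName: Test
  kubernetesServiceConnection: my-k8s-connection # hypothetical connection name

# azure-pipelines.yml
variables:
- template: vars/test.yml

stages:
- stage: Deployment
  jobs:
  - template: /templates/jobs/myJobTemplate.yml
    parameters:
      # resolved at compile time, so parameter validation can run against them
      environmentName: ${{ variables.environmentName }}
      kubernetesServiceConnection: ${{ variables.kubernetesServiceConnection }}
This only works for non-secret values; secrets have to stay in a variable group or Key Vault and can only be consumed at runtime.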

Is there any possibility to call AzDo templates with parameters?

I would like to run a few templates based on the initial value chosen for a parameter, and as soon as the value is chosen, a template should be issued which in turn asks for the additional parameters required only for that template.
Let's say in the main azure-pipelines.yml, if a user chooses dev then a template is simply called. However, if a user chooses test then the template create-stack-tst-template.yml is issued, but along with that it should prompt for the parameters needed by this template. Is that possible?
If not, is there any way to group the parameters needed only for dev, and likewise for test, so that when the individual templates are called, the grouped parameters necessary for that template (but not for the others) are passed?
Does any kind of segregation like that exist?
trigger:
- none

parameters:
- name: DeployToEnvType
  displayName: |
    Select the env type to be deployed
  type: string
  values:
  - dev
  - test

stages:
- ${{ if eq(parameters['DeployToEnvType'], 'dev') }}:
  - template: templates/create-stack-dev-template.yml
- ${{ if eq(parameters['DeployToEnvType'], 'test') }}:
  - template: templates/create-stack-tst-template.yml
    parameters:
    - name: ProjectName
      type: string
    - name: ImageSource
      type: string
it should prompt the parameters needed for this template. Is it possible?
This is not possible. You need to declare all parameters up front and pass on only those which are needed by a particular template.
trigger:
- none

parameters:
- name: DeployToEnvType
  displayName: |
    Select the env type to be deployed
  type: string
  values:
  - dev
  - test
- name: ImageSource
  type: string

stages:
- ${{ if eq(parameters['DeployToEnvType'], 'dev') }}:
  - template: templates/create-stack-dev-template.yml
    parameters:
      ProjectName: projectA
      ImageSource: ${{ parameters.ImageSource }}
- ${{ if eq(parameters['DeployToEnvType'], 'test') }}:
  - template: templates/create-stack-tst-template.yml
    parameters:
      ProjectName: projectA
      ImageSource: ${{ parameters.ImageSource }}
If you need control at runtime, you need to create a corresponding runtime parameter and pass it down. If you want some values fixed, you can just put them inline.