Azure DevOps Pipeline Templates From Folder - azure-devops

I am trying to use templates in my pipeline that live in the same project folder, but "Validate and run" on the pipeline gives me an error:
/templates/transform-settings.yml (Line: 1, Col: 1): A sequence was not expected
Here is the relevant part of azure-pipelines.yml:

imagePullSecret: ' fd bfgbgf '
dockerfilePath: '**/Dockerfile'
vmImageName: 'ubuntu-latest'
testVar: 'test'
tag: '$(Build.BuildId)'
ConnectionStrings.DefaultConnection: ""

stages:
- template: templates/transform-settings.yml
- stage: Build
  displayName: Build stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: Docker@2
      inputs:
        command: buildAndPush
        repository: $(imageRepository)
        dockerfile: $(dockerfilePath)
        containerRegistry: $(dockerRegistryServiceConnection)
...and the template:

- stage: TransformFiles
  displayName: TransformFiles
  variables:
  - ${{ if eq(variables['Build.SourceBranchName'], 'development') }}:
    - group: dev-secrets
    - name: testVar
      value: 'dev'
    - name: ConnectionStrings.DefaultConnection
      value: $(psql-conn-str-dev)
  - ${{ if eq(variables['Build.SourceBranchName'], 'qa') }}:
    - group: qa-secrets
    - name: testVar
      value: 'qa'
  jobs:
  - job: Transform_AppSettings
    steps:
    - bash: echo "===== Transforming appsettings.json for $(Build.SourceBranchName) environment ====="
      displayName: 'File Transform'
    - task: FileTransform@1
      inputs:
        folderPath: '$(System.DefaultWorkingDirectory)'
        fileType: 'json'
        targetFiles: 'appsettings.json'
    - upload: appsettings.json
      artifact: appsettings

/templates/transform-settings.yml (Line: 1, Col: 1): A sequence was not expected
Based on your YAML sample, the issue is related to the format of the template file.
To solve this issue, you need to add the stages: field at the top of the template YAML file.
For example:
azure-pipelines.yml:

stages:
- template: templates/transform-settings.yml
- stage: Build
  displayName: Build stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: Docker@2
      inputs:
        command: buildAndPush
        repository: $(imageRepository)
        dockerfile: $(dockerfilePath)
        containerRegistry: $(dockerRegistryServiceConnection)

transform-settings.yml:

stages:
- stage: TransformFiles
  displayName: TransformFiles
  variables:
  - ${{ if eq(variables['Build.SourceBranchName'], 'development') }}:
    - group: dev-secrets
    - name: testVar
      value: 'dev'
    - name: ConnectionStrings.DefaultConnection
      value: $(psql-conn-str-dev)
  - ${{ if eq(variables['Build.SourceBranchName'], 'qa') }}:
    - group: qa-secrets
    - name: testVar
      value: 'qa'
  jobs:
  - job: Transform_AppSettings
    steps:
    - bash: echo "===== Transforming appsettings.json for $(Build.SourceBranchName) environment ====="
      displayName: 'File Transform'
    - task: FileTransform@1
      inputs:
        folderPath: '$(System.DefaultWorkingDirectory)'
        fileType: 'json'
        targetFiles: 'appsettings.json'
    - upload: appsettings.json
      artifact: appsettings
For more detailed info, you can refer to this doc: YAML template.

Related

Conditional dependsOn for multiple stage YAML

I have a YAML pipeline which contains some template files.
Within my pipeline, there are 4 stages that run in parallel to apply DSC. I then have a destroy stage which I would like to run only when all 4 stages have run successfully. When I try to add a dependsOn with a list:
dependsOn:
- Stage_A
- Stage_B
- Stage_C
- Stage_D
The error I get is:
The 'dependsOn' parameter is not a valid String.
My template YAML looks like:

...
stages:
...
- template: Apply-DSC.yml
  parameters:
    azureSub: '[sub]'
    AutoAccountResourceGroup: 'rg'
    AutoAccountName: 'aa'
    environment: 'b1'
    stageDependsOn: 'b1_apply'
- template: Destroy-Pipeline.yml
  parameters:
    azureSub: '[sub]'
    terraformStorageAccountResourceGroup: 'rg'
    terraformStorageAccountName: '[]'
    terraformStorageContainerName: '[]'
    terraformStorageRemoteStateKey: '[].tfstate'
    environment: 'b1'
    terraformEnvironmentFileName: 'B01'
    dependsOn: 'Stage_A'
I have 4 stages within my Apply-DSC.yml:
- Stage_A
- Stage_B
- Stage_C
- Stage_D
The question is: is it possible for my destroy stage to await a successful deployment of Stages A-D when using these stage templates?
Thanks.
Edit: Adding Destroy-Pipeline.yml
# Run & upload Terraform plan
parameters:
- name: azureSub
  type: string
- name: terraformStorageAccountResourceGroup
  type: string
- name: terraformStorageAccountName
  type: string
- name: terraformStorageContainerName
  type: string
- name: terraformStorageRemoteStateKey
  type: string
- name: environment
  type: string
- name: terraformEnvironmentFileName
  type: string
- name: dependsOn
  type: string

stages:
- stage: Destroy_${{ parameters.environment }}
  dependsOn: ${{ parameters.dependsOn }}
  jobs:
  - deployment: '${{ parameters.environment }}_Destroy'
    displayName: '${{ parameters.environment }} Destroy'
    environment: '${{ parameters.environment }} destroy'
    pool:
      vmImage: windows-latest
    strategy:
      runOnce:
        deploy:
          steps:
          - download: current
            artifact: 'drop'
            name: 'Download_Terraform_code'
            displayName: 'Download Terraform code'
          - task: ms-devlabs.custom-terraform-tasks.custom-terraform-installer-task.TerraformInstaller@0
            inputs:
              terraformVersion: '$(TerraformVersion)'
            displayName: 'Install Terraform'
          - task: TerraformCLI@0
            inputs:
              command: 'init'
              workingDirectory: '$(Pipeline.Workspace)/Drop'
              backendType: 'azurerm'
              backendServiceArm: '${{ parameters.azureSub }}'
              backendAzureRmResourceGroupName: '${{ parameters.terraformStorageAccountResourceGroup }}'
              backendAzureRmStorageAccountName: '${{ parameters.terraformStorageAccountName }}'
              backendAzureRmContainerName: '${{ parameters.terraformStorageContainerName }}'
              backendAzureRmKey: '${{ parameters.terraformStorageRemoteStateKey }}'
              allowTelemetryCollection: false
            displayName: 'Terraform Init'
          - task: PowerShell@2
            inputs:
              targetType: 'inline'
              script: |
                terraform workspace select $(WorkspaceEnvironment)
              workingDirectory: '$(Pipeline.Workspace)/Drop'
            displayName: 'Select Workspace'
          - task: TerraformCLI@0
            inputs:
              command: 'plan'
              environmentServiceName: '${{ parameters.azureSub }}'
              commandOptions: '-destroy -var-file="./environments/${{ parameters.terraformEnvironmentFileName }}.tfvars" -input=false'
              allowTelemetryCollection: false
              workingDirectory: '$(Pipeline.Workspace)/Drop'
            displayName: 'Plan Destroy'
          - task: TerraformCLI@0
            inputs:
              command: 'destroy'
              workingDirectory: '$(Pipeline.Workspace)/Drop'
              environmentServiceName: '${{ parameters.azureSub }}'
              commandOptions: '-var-file="./environments/${{ parameters.terraformEnvironmentFileName }}.tfvars" -input=false'
              allowTelemetryCollection: false
            displayName: 'Run Destroy'
I changed the type from string to object:

parameters:
- name: dependsOn
  type: object
  default: []

Then, within my template block, I added the object like:

- template: Destroy-Pipeline.yml
  parameters:
    ...
    dependsOn: ['Stage_A', 'Stage_B' ...]
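Putting the pieces together, a minimal sketch of how the object parameter flows into the stage (illustrative only; the names are taken from the question and not verified against a live pipeline):

```yaml
# Destroy-Pipeline.yml (relevant fragment)
parameters:
- name: environment
  type: string
- name: dependsOn
  type: object
  default: []

stages:
- stage: Destroy_${{ parameters.environment }}
  # the template expression expands the object parameter into a YAML sequence
  dependsOn: ${{ parameters.dependsOn }}
```

Called with dependsOn: ['Stage_A', 'Stage_B', 'Stage_C', 'Stage_D'], the destroy stage then waits for all four stages.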

Set and refer to variables in YAML

I have the following YAML code that sets and refers to some variables as follows:
<one.yml>

- task: AzurePowerShell@4
  displayName: 'Copy functions templates'
  inputs:
    azureSubscription: ${{ parameters.serviceConnection }}
    ScriptPath: ${{ parameters.root }}/Scripts/ReleaseManagement/CopyChildTemplatesToContainer.ps1
    ScriptArguments: '-resourceGroupName ''${{ parameters.solutionAbbreviation }}-data-${{ parameters.environmentAbbreviation }}'''
  name: copyFunctionsTemplates
- powershell: |
    Write-Host "##vso[task.setvariable variable=data_containerSASToken;isOutput=true]$(copyFunctionsTemplates.containerSASToken)"
    Write-Host "##vso[task.setvariable variable=data_containerEndPoint;isOutput=true]$(copyFunctionsTemplates.containerEndPoint)"
  displayName: 'set data output variables'
  name: dataVariables
<two.yml>

stages:
- stage: ${{ parameters.stageName }}
  dependsOn: ${{ parameters.dependsOn }}
  condition: ${{ parameters.condition }}
  jobs:
  - deployment: ${{ parameters.stageName }}_DeployResources
    displayName: ${{ parameters.stageName }}_DeployResources
    steps:
    - template: one.yml
  - job: ${{ parameters.stageName }}_DeployFunctions
    dependsOn: ${{ parameters.stageName }}_DeployResources
    variables:
      data_containerEndPoint: $[ dependencies.DeployResources.outputs['DeployResources.dataVariables.data_containerEndPoint'] ]
      data_containerSASToken: $[ dependencies.DeployResources.outputs['DeployResources.dataVariables.data_containerSASToken'] ]
    steps:
    - ${{ each func in parameters.functionApps }}:
      - template: three.yml
<three.yml>

steps:
- task: AzureResourceGroupDeployment@2
  displayName: 'deploy ${{ parameters.name }} data resources'
  inputs:
    azureSubscription: ${{ parameters.serviceConnection }}
    resourceGroupName: ${{ parameters.solutionAbbreviation }}-data-${{ parameters.environmentAbbreviation }}
    location: ${{ parameters.location }}
    csmFile: ${{ parameters.root }}/functions_arm_templates/${{ parameters.name }}/Infrastructure/data/template.json
    csmParametersFile: ${{ parameters.root }}/functions_arm_templates/${{ parameters.name }}/Infrastructure/data/parameters/parameters.${{ parameters.environmentAbbreviation }}.json
    overrideParameters: >-
      -environmentAbbreviation "${{ parameters.environmentAbbreviation }}"
      -tenantId "${{ parameters.tenantId }}"
      -solutionAbbreviation "${{ parameters.solutionAbbreviation }}"
      -containerBaseUrl "$(data_containerEndPoint)functions/${{ parameters.name }}/Infrastructure/data/"
      -containerSasToken "$(data_containerSASToken)"
    deploymentMode: 'Incremental'
On enabling debug mode while running the pipeline, I see values printed for data_containerSASToken and data_containerEndPoint from the 'Copy functions templates' task; however, I see empty values in the 'deploy data resources' task. What am I missing?
Your problem may be in how you retrieve the output from the previous job:

data_containerEndPoint: $[ dependencies.DeployResources.outputs['DeployResources.dataVariables.data_containerEndPoint'] ]

That's looking for a prior job called DeployResources, but the prior job is actually called ${{ parameters.stageName }}_DeployResources.
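The corrected variable mapping might then look like this (a sketch mirroring the naming pattern from the question; the exact outputs key depends on how the deployment job publishes its variables, so verify it against the pipeline's debug logs):

```yaml
variables:
  data_containerEndPoint: $[ dependencies.${{ parameters.stageName }}_DeployResources.outputs['${{ parameters.stageName }}_DeployResources.dataVariables.data_containerEndPoint'] ]
  data_containerSASToken: $[ dependencies.${{ parameters.stageName }}_DeployResources.outputs['${{ parameters.stageName }}_DeployResources.dataVariables.data_containerSASToken'] ]
```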

Execute Azure DevOps job on a pool based on conditional parameter

I am trying to execute an Azure DevOps job on a specific pool based on a condition.
The goal is to switch between a self-hosted agent and a Microsoft-hosted agent.
Here is the configuration:
parameters:
  custom_agent: true

jobs:
- job: Test
  displayName: Test job
  - ${{ if eq(parameters.custom_agent, true) }}:
    - pool:
        name: mypool
        demands:
        - agent.os -equals Linux
  - ${{ if eq(parameters.custom_agent, false) }}:
    - pool:
        vmImage: 'ubuntu-latest'
  steps:
  - task: npmAuthenticate@0

Any ideas?
The example below solved my requirement:

parameters:
- name: 'vmImage'
  type: string
  default: 'ubuntu-latest'
- name: 'agentPool'
  type: string
  default: ''

jobs:
- job: 'Example'
  pool:
    ${{ if ne(parameters.agentPool, '') }}:
      name: ${{ parameters.agentPool }}
    ${{ if eq(parameters.agentPool, '') }}:
      vmImage: ${{ parameters.vmImage }}
  steps:
  - script: example
Another approach to conditionally select pools, if you use non-VM pools:

variables:
- ${{ if eq(parameters.custom_agent, true) }}:
  - name: testJobPool
    value: mypool
- ${{ if eq(parameters.custom_agent, false) }}:
  - name: testJobPool
    value: mypool_second

jobs:
- job: Test
  displayName: Test job
  pool:
    name: $(testJobPool)
  steps:
  - task: npmAuthenticate@0

This has proven to work.
We can specify conditions under which a step, job, or stage will run. We can configure the jobs in the pipeline with different condition entries, and set demands based on those conditions.
A skeleton version looks like this:
parameters:
- name: custom_agent
  displayName: Pool Image
  type: boolean
  default: True

jobs:
- job: selfhostedagent
  condition: eq(${{ parameters.custom_agent }}, True)
  displayName: 'self_hosted agent'
  pool:
    name: Default
    demands:
    - Agent.Name -equals WS-VITOL-01
  steps:
  - script: echo self_hosted agent
- job: hostedagent
  condition: eq(${{ parameters.custom_agent }}, False)
  displayName: 'hosted agent'
  pool:
    vmImage: 'ubuntu-latest'
  steps:
  - script: echo hosted agent
Update 1:
In addition, we can configure a step template and then use that template in the steps.
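A hypothetical sketch of such a step template (the file name and echo text are illustrative, not from the original answer):

```yaml
# shared-steps.yml
steps:
- script: echo running shared steps

# consuming pipeline
jobs:
- job: hostedagent
  pool:
    vmImage: 'ubuntu-latest'
  steps:
  - template: shared-steps.yml
```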
It looks like pool is not a valid property of a job type.
Try switching your job to a deployment type:

jobs:
- deployment: Test
  ${{ if eq(parameters.custom_agent, true) }}:
    pool:
      name: mypool
      demands:
      - agent.os -equals Linux
  strategy:
    runOnce:
      deploy:
        steps:

How to pass a list as build parameter to a YAML template in Azure DevOps Server 2019 (on-prem)?

For example, consider the following template (named xyz.yml, for example):
parameters:
  projects: ['p1', 'p2', 'p3']

steps:
- ${{ each project in parameters.projects }}:
  - task: VSBuild@1
    displayName: Build ${{ project }}
    inputs:
      solution: ${{ project }}.sln
      ...
Now, suppose I have the following azure-pipelines.yml file:
...
steps:
...
- template: xyz.yml
  parameters:
    projects: ???
...
How can I feed the projects template parameter from a build variable? Suppose at the time of the build I want to request building just p1 and p3. How can I do it?
You could try to use a stepList-type parameter and pass the parameter value through to the template.
For example:
main.yaml:

parameters:
- name: mySteplist
  type: stepList
  default:
  - task: CmdLine@2
    inputs:
      script: |
        echo Write your commands here
        echo Hello world1
  - task: CmdLine@2
    inputs:
      script: |
        echo Write your commands here
        echo Hello world2

trigger:
- none

steps:
- template: stepstem.yml
  parameters:
    buildSteps:
    - ${{ parameters.mySteplist }}
# - template: stepstem.yml
#   parameters:
#     buildSteps:
#     - bash: echo Test #Passes
#       displayName: succeed
#     - bash: echo "Test"
#       displayName: succeed
#     - ${{ parameters.mySteplist }}
- task: CmdLine@2
  inputs:
    script: |
      echo Write your commands here
      echo Hello world3
stepstem.yaml:

parameters:
- name: buildSteps # the name of the parameter is buildSteps
  type: stepList # data type is StepList
  default: []

steps:
- ${{ parameters.buildSteps }}
- task: CmdLine@2
  inputs:
    script: |
      echo Write your commands here
      echo Hello world tem
- script: echo "hello"
So, you could use VSBuild@1 tasks as the default parameter value and change it when you queue the build.
Check the following example:
#xyz.yml
parameters:
  projects: []

steps:
- ${{ each project in parameters.projects }}:
  - task: VSBuild@1
    displayName: Build ${{ project }}
    inputs:
      solution: ${{ project }}.sln
      ...
...

#azure-pipelines.yml
steps:
- template: xyz.yml
  parameters:
    projects: ["p1", "p3"]

Parameterized variable names as task input in Azure Pipelines

I've been trying to make a YAML template that first uses the AzureKeyVault@1 task to get the value of some Azure Key Vault secrets, and then uses these secrets for the sqlUsername and sqlPassword in a sqlAzureDacpacDeployment@1 task.
I want to make the names of the Key Vault secrets a parameter, so that this template can be used in many different situations.
I've successfully used this technique before, with an AzureKeyVault@1 task followed by an AzurePowerShell@4 task, where the secret gets injected as an environment variable for the PowerShell script.
This is a dressed-down version of the working template:

parameters:
- name: subscription
  type: string
- name: keyVaultSecretName
  type: string
- name: keyVault
  type: string

jobs:
- job: Run_PowerShell_With_Secret
  pool:
    name: Azure Pipelines
    vmImage: windows-latest
  steps:
  - task: AzureKeyVault@1
    inputs:
      azureSubscription: ${{ parameters.subscription }}
      keyVaultName: ${{ parameters.keyVault }}
      secretsFilter: ${{ parameters.keyVaultSecretName }}
  - task: AzurePowerShell@4
    inputs:
      azureSubscription: ${{ parameters.subscription }}
      ScriptPath: 'some_script.ps1'
      azurePowerShellVersion: LatestVersion
    env:
      SECRETVALUE: $(${{ parameters.keyVaultSecretName }})
And here is the template where I can't get the same technique to work:

parameters:
- name: subscription
  type: string
- name: keyVault
  type: string
- name: sqlServerName
  type: string
- name: sqlDatabaseName
  type: string
- name: sqlServerAdminSecretName
  type: string
- name: sqlServerPasswordSecretName
  type: string
- name: dacpacName
  type: string
- name: artifactName
  type: string

jobs:
- job: Deploy_SQL_Database
  pool:
    name: Azure Pipelines
    vmImage: windows-latest
  steps:
  - task: DownloadPipelineArtifact@2
    inputs:
      artifact: ${{ parameters.artifactName }}_artifacts
  - task: AzureKeyVault@1
    inputs:
      azureSubscription: ${{ parameters.subscription }}
      keyVaultName: ${{ parameters.keyVault }}
      secretsFilter: '${{ parameters.sqlServerAdminSecretName }}, ${{ parameters.sqlServerPasswordSecretName }}'
  - task: sqlAzureDacpacDeployment@1
    inputs:
      azureSubscription: ${{ parameters.subscription }}
      ServerName: ${{ parameters.sqlServerName }}.database.windows.net
      DatabaseName: ${{ parameters.sqlDatabaseName }}
      sqlUsername: $(${{ parameters.sqlServerAdminSecretName }})
      sqlPassword: $(${{ parameters.sqlServerPasswordSecretName }})
      DacpacFile: $(Pipeline.Workspace)\${{ parameters.dacpacName }}.dacpac
I can get the template to work if I hardcode the secret names:

parameters:
- name: subscriptionName
  type: string
- name: keyVault
  type: string
- name: sqlServerName
  type: string
- name: sqlDatabaseName
  type: string
- name: dacpacName
  type: string
- name: artifactName
  type: string

jobs:
- job: Deploy_${{ parameters.sqlDatabaseName }}_Database
  pool:
    name: Azure Pipelines
    vmImage: windows-latest
  steps:
  - checkout: none
  - task: AzureKeyVault@1
    inputs:
      azureSubscription: ${{ parameters.subscriptionName }}
      keyVaultName: ${{ parameters.keyVault }}
      secretsFilter: 'SQLServerAdmin, SQLServerPassword'
  - task: DownloadPipelineArtifact@2
    inputs:
      artifact: ${{ parameters.artifactName }}_artifacts
  - task: sqlAzureDacpacDeployment@1
    inputs:
      azureSubscription: ${{ parameters.subscriptionName }}
      ServerName: ${{ parameters.sqlServerName }}.database.windows.net
      DatabaseName: ${{ parameters.sqlDatabaseName }}
      sqlUsername: $(sqlServerAdmin)
      sqlPassword: $(sqlServerPassword)
      DacpacFile: $(Pipeline.Workspace)\${{ parameters.dacpacName }}.dacpac
Although this works for us for now, I find it suboptimal. Is there any way to make these parameterized variable names work?